The Value of Myth in Depicting the Conflict Between Good and Evil

In October 2013, three young friends living in the Washington DC area — a male-female couple and a male friend — went out to a number of local bars to celebrate a birthday. The friends drank copiously, then returned to a small studio apartment at 2 a.m. An hour later, one of the men stabbed his friend to death. When police arrived, they found the surviving man covered in blood, as were the floor and walls. “I caught my buddy and my girl cheating,” said the man. “I killed my buddy.” He was subsequently found guilty of murder and sentenced to life in prison.

How did this happen? Was murder inevitable? It seems unlikely. The killing was not pre-planned. No one in the group had a prior record of violence or criminal activity. All three friends were well-educated and successful, with bright futures ahead of them. It’s true that all were extremely drunk, but drunkenness very rarely leads to murder.

This case is noteworthy, not because murders are unusual — murders happen all the time — but because this particular murder seems to have been completely unpredictable. It’s the normality of the persons and circumstances that disturbs the conscience. Under slightly different circumstances, the murder would not have happened at all, and all three would conceivably have lived long, happy lives.

Most of us are law-abiding citizens. We believe we are good, and despise thieves, rapists, and murderers. But what happens when the normal conditions under which we live change, when we are humiliated or outraged, when there is no security for our lives or property, when our opportunities for happiness are snatched from us for no good reason? How far will we go to avenge ourselves, and what violence will we justify in order to restore our own notion of justice?

The conflict between good and evil tendencies within human beings is a frequent theme in both philosophy and religion. However, philosophy has tended to attribute the evil within humanity to a deficiency of reason. In the view of many philosophers, reason alone should be able to establish that human rights are universal, and that impulses to violence, conquest, and enslavement are irrational. Furthermore, they argue that when reason establishes its dominance over the passions within human beings, societies become freer and more peaceful. (Notably, the great philosophers David Hume and Adam Smith rejected this argument.)

The religious interpretation of the conflict between good and evil, on the other hand, is based more upon myth and faith. And while the myths of religion are not literally accurate in terms of history or science, these myths often have insights into the inner turmoil of human beings that are lost in straightforward descriptions of fact and an emphasis on rationality.

The Christian scholar Paul Elmer More argued in his book, The Religion of Plato, that the dualism between good and evil within the human soul was very effectively described by the Greek philosopher Plato, but that this description relied heavily on the picturesque elements of myth, as found in the Republic, the Laws, the Timaeus, and other works. In Plato’s view, there was a struggle within all human beings between a higher nature and a lower nature, the higher nature being drawn to a vision of ideal forms and the lower nature being dominated by the flux of human passions and desires. According to More,

It is not that pleasure or pain, or the desires and emotions connected with them, are totally depraved in themselves . . . but they contain the principle of evil in so far as they are radically unlimited, belonging by nature to what in itself is without measure and tends by inertia to endless expansion. Hence, left to themselves, they run to evil, whereas under control they may become good, and the art of life lies in the governing of pleasure and pain by a law exterior to them, in a man’s becoming master of himself, or better than himself. (pp. 225-6)

What are some of the myths Plato discusses? In The Republic, Plato tells the story of Gyges, a lowly shepherd who discovers a magic ring that bestows the power of invisibility. With this invisibility, Gyges is able to go wherever he wants undetected, and to do what he wants without anyone stopping him. Eventually, Gyges kills the king of his country and obtains absolute power for himself. In discussing this story, Glaucon, Plato’s brother and one of Socrates’ interlocutors, argues that with the awesome power of invisibility, no man would be able to remain just, in light of the benefits one could obtain. However, Socrates responds that being a slave to one’s desires does not actually bring long-term happiness, and that the happy man is one who is able to control his desires.

In the Phaedrus, Plato relates the dialogue between Socrates and his pupil Phaedrus on whether friendship is preferable to love. Socrates discusses a number of myths throughout the dialogue, but appears to use these myths as metaphorical illustrations of the internal struggle within human beings between their higher and lower natures. It is the nature of human beings, Socrates notes, to pursue the good and the beautiful, and this pursuit can be noble or ignoble depending on whether reason is driving one toward enlightenment or desire takes over and drives one to excessive pleasure-seeking. Indeed, Socrates describes love as a type of “madness” — but he argues that this madness is a source of inspiration that can result in either good or evil depending on how one directs the passions. Socrates proceeds to employ a figurative picture of a charioteer driving two horses, with one horse being noble and the other ignoble. The noble horse pulls the charioteer toward heaven, while the ignoble horse pulls the charioteer downward, toward the earth and potential disaster. Moreover, the human being in love is influenced by the god he or she follows; the followers of Ares, the god of war, are inclined to violence if they feel wronged by their lover; the followers of Zeus, on the other hand, use love to seek philosophical wisdom.

The nature and purpose of love is also discussed in the Symposium. In this dialogue, Socrates relates a fantastical myth about human beings originally being created with two bodies attached at the back, with two heads, four arms, and four legs. These beings apparently threatened the gods, so Zeus cut the beings in two; henceforth, humans spent their lives trying to find their other halves. Love inspires wisdom and courage, according to the dialogue, but only when it encourages companionship and the exchange of knowledge, and is not merely the pursuit of sexual gratification.

[Illustration of the original humans described in Plato’s Symposium.]

In the Timaeus, Plato discusses the creation of the universe and the role of human beings in this universe. Everything proceeds from the Good, argued Plato. However, the Good is not some lifeless abstraction, but a power with a dynamic element. According to More, Plato gave the name of God to this dynamic element. God fashions the universe according to an ideal pattern, but the end result is always less than perfect because of the resistance of the materials and the tendency of material things to always fall short of their perfect ends.

Plato argues that there are powers of good and powers of evil in the universe — and within human beings — and Plato personifies these powers as gods or daemons. There is a struggle between good and evil in which all humans participate, and all are subject to judgment at the ends of their lives. (Plato believed in reincarnation and posited that deeds in one’s recent life determined one’s station in the next life.) Here, we see myth and faith enter again into Plato’s philosophy, and More defends the use of these stories and symbols as a means of illustrating the dramas of moral conflict:

In this last stage the essential truth of philosophy as a concern of the individual soul, is rendered vivid and convincing by clothing it in the imaginative garb of fiction — fiction which may yet be a veil, more or less transparent, through which we behold the actual events of the spirit world; and this aid of the imagination is needed just because the dualism of the human consciousness cannot be grasped by the reason, demands indeed a certain abatement of that rationalizing tendency of the mind which, if left to itself, inevitably seeks its satisfaction in one or the other form of monism. (p. 199)

What’s fascinating about Plato’s use of myths in philosophy is that while he recognizes that many of the myths are literally dubious or false, they seem to point to truths that are difficult or impossible to express in literal language. Love really does seem to be a desire to unite with one’s missing half, and falling in love really is akin to madness, a madness that can lead to disaster if one is not careful. Humankind does seem to be afflicted by an internal struggle between a higher, noble nature and a lower nature, with the lower nature inclined to self-centeredness and grasping for ever more wealth, power, and pleasure.

Plato had enormous influence on Western civilization, but More argues that the successors to Plato erred by abandoning Plato’s use of myth to illustrate the duality of human nature. Over the years, Greek philosophy became increasingly rationalistic and prone to a monism that was unable to cope with the reality of human dualism. (For an example of this extreme monism, see the works of Plotinus, who argued for an abstract “One” as the ultimate source of all things.) Hence, argued More, Christianity was in fact the true heir of Platonism, and not the Greek philosophers that came after Plato.

Myth is “the drama of religion,” according to More, not a literally accurate description of a sequence of events. Reason and philosophy can analyze and discuss good and evil, but to fully understand the conflict between good and evil, within and between human beings, requires a dramatic depiction of our swirling, churning passions. In More’s words, “A myth is false and reprehensible in so far as it misses or distorts the primary truth of philosophy and the secondary truth of theology; it becomes more probable and more and more indispensable to the full religious life as it lends insistence and reality to those truths and answers to the daily needs of the soul.” (p. 165) The role of Christian myths in illustrating the dramatic conflict between good and evil will be discussed in the next essay.

The Influence of Christianity on Western Culture, Part One: Liberty, Equality, and Human Rights

Does religion have a deep influence on the minds of those living in a largely secular culture, shaping the subconscious beliefs and assumptions of even staunch atheists? Such is the argument of New York Times columnist Ross Douthat, who argues that the contemporary secular liberalism of America and Europe is rooted in the principles of Christianity, and that our civilization suffers when it borrows selectively from Christianity while rejecting the religion as a whole.

Douthat’s provocative claim was challenged by the liberal commentators Will Saletan and Julian Sanchez, and if you have time, you can review their three-sided debate in the original exchange of posts. In brief, Douthat argues the following:

When I look at your secular liberalism, I see a system of thought that looks rather like a Christian heresy, and not necessarily a particularly coherent one at that. In Bad Religion, I describe heresy as a form of belief that tends to emphasize certain elements of the Christian synthesis while downgrading or dismissing other aspects of that whole. And it isn’t surprising that liberalism, which after all developed in a Christian civilization, does exactly that, drawing implicitly on the Christian intellectual inheritance to ground its liberty-equality-fraternity ideals.

Indeed, it’s completely obvious that absent the Christian faith, there would be no liberalism at all. No ideal of universal human rights without Jesus’ radical upending of social hierarchies (including his death alongside common criminals on the cross). No separation of church and state without the gospels’ ‘render unto Caesar’ and St. Augustine’s two cities. No liberal confidence about the march of historical progress without the Judeo-Christian interpretation of history as an unfolding story rather than an endlessly repeating wheel. . . .

And the more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual ‘skyhooks,’ suspended halfway between our earth and the heaven on which many liberals have long since given up. 

Julian Sanchez, a scholar with the Cato Institute, responds to Douthat by noting that societies don’t need to agree on God and religion to support human rights, only to agree that human rights are good. According to Sanchez, invoking God as the source of goodness doesn’t really solve any problems; at best, it provides prudential reasons to behave well (i.e., to obtain rewards and avoid punishment in the afterlife). If we believe human rights are good and need to be preserved, the idea of God adds nothing to the belief: “The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own?” Furthermore, Sanchez argues that morals can be regarded as “normative properties” that are already part of reality, and that secular moralists can appeal to this reality just as easily as believers appeal to God, only normative properties don’t require beliefs about implausible deities and “Middle Eastern folklore.”

Both Douthat and Sanchez make some good arguments, but there are some weaknesses in both sides’ claims that I wish to explore in this extended essay. My view, in brief, is this: Christianity, or any other religion, does not have to be a package deal. Religious claims about various miracles that seem to violate the patterns of nature established by science or the empirical findings of history and archeology should be subject to scrutiny and skepticism like any other claim. Traditional morals that have long-standing religious justifications, from child marriage to slavery, should be subject to the same scrutiny, and rejected when necessary.

And yet, it is difficult to deny the influence of religion on our perceptions — and conceptions — of what is good. I find existing attempts to base human morality and rights solely on reason and science to be unpersuasive; morals are not like the patterns of nature, nor can they be proved by the deductive methods of reason without accepting premises that cannot be proved. While rooted in reality, morals seem to point to something higher than our current reality. And human freedom to choose defies our attempts to prove the existence of morals in the same way that we can prove the deterministic patterns of gravity, chemical reactions, and nuclear fission.

____________________________

Let us consider one such attempt by Michael Shermer, director of The Skeptics Society and founder of Skeptic magazine, to establish human rights through science and reason. In an article for Theology and Science, Shermer attempts to found human rights on reason and science, relying exclusively on “nature and nature’s laws.”

Mr. Shermer begins his essay by noting the many people in Europe who were put to death for the crime of “witchcraft” in the 15th through 17th centuries, and how this witch-hunting hysteria was endorsed by the Catholic Church. Fortunately, notes Mr. Shermer, “scientific naturalism,” the “principle that the world is governed by natural laws and forces that can be understood,” and “Enlightenment humanism” arose in the 18th and 19th centuries, destroying the old superstitions of religion. Shermer cites Steven Pinker to explain how the application of scientific naturalism to human affairs provided the principles on which human societies made moral progress:

When a large enough community of free, rational agents confers on how a society should run its affairs, steered by logical consistency and feedback from the world, their consensus will veer in certain directions. Just as we don’t have to explain why molecular biologists discovered that DNA has four bases . . . we may not have to explain why enlightened thinkers would eventually argue against African slavery, cruel punishments, despotic monarchs, and the execution of witches and heretics.

Shermer argues that morals follow logically from reason and observation, and proposes a Principle of Moral Good: “Always act with someone else’s moral good in mind, and never act in a way that leads to someone else’s moral loss (through force or fraud).”

Unfortunately, this principle, allegedly founded on reason and science, appears to be simply another version of the “Golden Rule,” which has been in existence for over two thousand years, and is found in nearly all the major religions. (The West knows the Golden Rule mainly through Christianity.) None of the religions discovered this rule through science or formal logical deduction. Human rights are not subject to empirical proof like the laws of physics and they don’t follow logically from deductive arguments, unless one begins with premises that support — or even presuppose — the conclusion.

Human rights are a cultural creation. They don’t exist in nature, at least not in a way that we can observe them. To the extent human rights exist, they exist in social practices and laws — sometimes only among a handful of people, sometimes only for certain categories of persons, sometimes widely in society. People can choose to honor and respect human rights, or violate such rights, and do so with impunity.

For this reason, I regard human rights as a transcendent value, something that does not exist in nature, but that many of us regard as worth aspiring to. In a previous essay on transcendence, I noted:

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach. . . . We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are.

The evils that human beings have inflicted on other human beings throughout history cannot all be attributed to superstitions and mistaken beliefs, whether about witchcraft or the alleged inferiority of certain races. Far more people have been killed in wars for territory, natural resources, control of trade routes, and the power to rule than have been killed by accusations of witchcraft. And why not? Is it not compatible with reason to desire wealth and power? The entire basis of economics is that people are going to seek to maximize their wealth. And the basis of modern liberal democracy is the idea that checks and balances are needed to block excessive power-seeking, that reason itself is insufficient. Historians don’t ask why princes seek to be kings, and why nations seek to expand their territory — it is taken for granted that these desires are inherent to human beings and compatible with reason. As for slavery, it may have been justified by reference to certain races as inferior, but the pursuit of wealth was the main motivation of slave owners, with the justifications tacked on for appearance’s sake. After all, the fact that states in the American South felt compelled to pass laws forbidding the teaching of blacks to read indicates that southerners did in fact see blacks as human beings capable of reasoning.

The problem with relying on reason as a basis for human rights is that reason in itself is unable to bridge the gap between desiring one’s own good and desiring the same good for others. It is a highly useful premise in economics and political science that human beings are going to act to maximize their own good. From this premise, many important and useful theories have been developed. Acting for the good of others, on the other hand, particularly when it involves a high degree of self-sacrifice, is extremely variable. It takes place within families, to a limited extent it results in charitable contributions to strangers, and in some cases, soldiers and emergency rescue workers give their lives to save others. But it’s not reason that’s the motivating factor here — it’s love and sympathy and a sense of duty. Reason, on the other hand, is the tool that tells you how much you can give to others without going broke.

Still, it’s one thing to criticize reason as the basis of human rights; it is quite another to give Christianity credit for human rights. The historical record of Christianity with regard to human rights is not one that inspires. Nearly all of the Christian churches have been guilty of instigating, endorsing, or tolerating slavery, feudalism, despotism, wars, and torture for hundreds of years. The record is long and damning.

Still, is it possible that Christianity provided the cultural assumptions, categories, and framework for the eventual flourishing of human rights? After all, neither the American Revolution nor the French Revolution was successful at first in fully implementing human rights. America fought a civil war before slavery was ended, did not allow women to vote until 1920, and did not grant most blacks a consistently recognized right to vote until the 1960s. The French Revolution of 1789 degenerated into terror, dictatorship, and wars of conquest; it took many decades for France to attain a reasonably stable republic. The pursuit of human rights even under secular liberalism was a long, hard struggle, in which ideals were only very gradually realized.

This long struggle to implement liberal ideals raises the question: Is it possible that Christianity had a long-term impact on the development of the West that we don’t recognize because we had already absorbed Christian assumptions and premises in our reason and did not question them? This is the question that will be addressed in subsequent posts.

Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:


1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”)

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (Percept and Concept – The Abuse of Concepts)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.


2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above normal. They did not lose their intellectual ability, but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure persons’ emotional intelligence.


3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. If we invent a new word that becomes widely adopted, if we come up with an idea that is both completely original and worthy, that is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples are also filled with superstition, with backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, from the most intelligent people in that culture, has overcome many backward and barbarous practices, and has replaced superstition with science.” To which, I reply, “Yes, but very few people actually have original and valuable contributions to knowledge, and their contributions are often few and in specialized fields. Even these creative geniuses must take for granted most of the culture they have lived in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)

 

4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is, of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head; you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to be fully successful in interpreting medical scans simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience but can’t be fully learned in a text. Many times when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
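The feedback mechanism described above — strengthening connections that produce correct answers and weakening those that produce wrong ones — can be illustrated with a toy sketch. This is a hypothetical, minimal example of a single artificial “neuron” (a perceptron), not a real medical-imaging system; the data and function names are invented for illustration. Note that the learned weights encode no explicit rules, only a correlation built up from feedback:

```python
# A minimal, illustrative sketch of learning from feedback alone:
# a single artificial "neuron" adjusts its connection weights after
# each example, with no explicit rules ever written down.

def train(examples, epochs=50, lr=0.1):
    """examples: list of (features, label) pairs, where label is 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Prediction: weighted sum of inputs, thresholded at zero.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            # Feedback: strengthen or weaken each connection in
            # proportion to its contribution to the error.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy data: two kinds of patterns, distinguished by which feature dominates.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.8], 0), ([0.2, 0.9], 0)]
w, b = train(data)
```

After training, the “knowledge” lives entirely in the numeric weights: one can inspect them, but there is no human-readable rule to extract — a small-scale version of the “black box” problem.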

 

5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by particular persons at some point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (the upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but rather of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincaré believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincaré believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.

 

6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics, chemistry, and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________

 

This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle, but which has been used as an example of Aristotle’s logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.
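The syllogism above can be rendered as a machine-checked proof. The following is an illustrative sketch in Lean 4 (the names `Person`, `Man`, `Mortal`, and `socrates` are invented for the example); the conclusion follows from the two premises by a single application of the first premise to Socrates:

```lean
-- A formalization of the classic syllogism:
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

example (allMenMortal : ∀ p, Man p → Mortal p)  -- All men are mortal.
        (socratesMan : Man socrates)            -- Socrates is a man.
        : Mortal socrates :=                    -- Therefore, Socrates is mortal.
  allMenMortal socrates socratesMan
```

Note that the proof checker can only certify that the conclusion follows from the premises; it says nothing about whether the premises themselves are true — which is precisely the regress problem discussed next.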

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

Two Types of Religion

Debates about religion in the West tend to center around the three monotheistic religions — Judaism, Christianity, and Islam.  However, it is important to note that these three religions are not necessarily typical or representative of religion in general.

In fact, there are many different types of religion, but for purposes of simplicity I would like to divide the religions of the world into two types: revealed religion and philosophical religion.  These two categories are not mutually exclusive, and many religions overlap both, but I think it is a useful conceptual divide.

“Revealed religion” has been defined as a “religion based on the revelation by God to man of ideas that he would not have arrived at by his natural reason alone.”  The three monotheistic religions all belong in this category, though there are philosophers and elements of philosophy in these religions as well.  Most debates about religion and science, or religion and reason, assume that all religions are revealed religions.  However, there is another type of religion: philosophical religion.

Philosophical religion can be defined as a set of religious beliefs that are arrived at primarily through reason and dialogue among philosophers.  The founders of philosophical religion put forth ideas on the basis that these ideas are human creations accessible to all and subject to discussion and debate like any other idea.  These religions are found in the Far East, and include Confucianism, Taoism, and Hinduism.  However, there are also philosophical religions in the West, such as Platonism or Stoicism, and there have been numerous philosophers who have constructed philosophical interpretations of the three monotheistic religions as well.

There are a number of crucial distinguishing characteristics that separate revealed religion from philosophical religion.

Revealed religion originates in a single prophet, who claims to have direct communication with God.  Even when historical research indicates multiple people playing a role in founding a revealed religion, as well as the borrowing of concepts from other religions, the tradition and practice of revealed religion generally insists upon the unique role of a prophet who is usually regarded as infallible or close to infallible — Moses, Jesus, or Muhammad.  Revealed religion also insists on the existence of God, often defined as a personal, supreme being who has the qualities of omniscience and omnipotence.  (It may seem obvious to many that all religions are about God, but that is not the case, as will be discussed below.)

Faith is central to revealed religion.  Rational argument and evidence may be used to convince others of the merits of a revealed religion, but ultimately there are too many fundamental beliefs in a revealed religion that are either non-demonstrable or contradictory to evidence from science, history, and archeology.  Faith may be used positively, as an aid to making a decision in the absence of clear evidence, so that one does not sustain loss from despair and a paralysis of will; however, faith may also be used negatively, to deny or ignore findings from other fields of knowledge.

The problems with revealed religion are widely known: these religions are prone to a high degree of superstition and many followers embrace anti-scientific attitudes when the conclusions of science refute or contradict the beliefs of revealed religion.  (This is a tendency, not a rule — for example, many believers in revealed religion do not regard a literal interpretation of the Garden of Eden story as central to their beliefs, and they fully accept the theory of evolution.)  Worse, revealed religions appear to be prone to intolerance, oppression of non-believers and heretics, and bloody religious wars.  It seems most likely that this intolerance is the result of a belief system that sees a single prophet as having a unique, infallible relationship to God, with all other religions being in error because they lack this relationship.

Philosophical religion, by contrast, emerges from a philosopher or philosophers engaging in dialogue.  In the West, this role was played by philosophers in ancient Greece and Rome, before their views were eclipsed by the rise of the revealed religion of Christianity.  In the East, philosophers were much more successful in establishing great religions.  In China, Confucius established a system of beliefs about morals and righteous behavior that influenced an entire empire, while Lao Tzu proposed that a mysterious power known as the “Tao” was the source and driving force behind everything.  In India, Hinduism originated as a diverse collection of beliefs by various philosophers, with some unifying themes, but no single creed.

As might be expected, philosophical religions have tended to be more tolerant and cosmopolitan than revealed religions.  Neither Greek nor Roman philosophers were inclined to kill each other over the finer points of Plato’s conception of God or the various schools of Stoicism, because no one ever claimed to have an infallible relationship with an omnipotent being.  In China, Confucianism, Taoism, and Buddhism are not regarded as incompatible, and many Chinese subscribe to elements of two or all three belief systems.  It is rare to see a religious war between adherents of philosophical religions.  And although many people automatically equate religion with faith, there is usually little or no role for faith in philosophical religions.

The role of God in philosophical religions is very different from the role of God in revealed religions.  Most philosophers, in east and west, defined God in impersonal terms, or proposed a God that was not omnipotent, or regarded a Creator God as unimportant to their belief system.  For example, Plato proposed that a secondary God known as a “demiurge” was responsible for creating the universe; the demiurge was not omnipotent, and was forced to create a less-than-perfect universe out of the imperfect materials he was given.  The Stoics did not subscribe to a personal God and instead proposed that a divine fire pervaded the universe, acting on matter to bring all things into accordance with reason.  Confucius, while not explicitly rejecting the possibility of God, did not discuss God in any detail, and had no role for divine powers in his teachings.  The Tao of Lao Tzu is regarded as a mysterious power underlying all things, but it is certainly not a personal being.  Finally, the concept of a Creator God is not central to Hinduism; in fact one of the six orthodox schools of Hinduism is explicitly atheistic, and has been for over two thousand years.

There are many virtues to philosophical religion.  While philosophical religion is not immune to the problem of incorrect conceptions and superstition, it does not resist reason and science, nor does it attempt to stamp out challenges to its claims to the same extent as revealed religions.  Philosophical religion is largely tolerant and reasonable.

However, there is also something arid and unsatisfying about many philosophical religions.  The claims of philosophical religion are usually modest, and philosophical religion has cool reason on its side.  But philosophical religion often lacks the emotional and imaginative content of revealed religion, and in this way it falls short.  The emotional swings and imaginative leaps of revealed religion can be dangerous, but emotion and imagination are also essential to full knowledge and understanding.  One cannot properly assign values to things and develop the right course of action without the emotions of love, joy, fear, anger, and sadness.  Without imagination, it is not possible to envision better ways of living.  When confronted with mystery, a leap of faith may be justified, or even required.

Abstractly, I have a great appreciation for philosophical religion, but in practice, I prefer Christianity.  I have the greatest admiration for the love of Christ, and I believe in Christian love as a guide for living.  At the same time, my Christianity is unorthodox and leavened with a generous amount of philosophy.  I question various doctrinal points of Christianity, I believe in evolution, and I don’t believe in miracles that violate the physical laws that have been discovered by science.  I think it would do the world good if revealed religions and philosophical religions recognized and borrowed each other’s virtues.