Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:

 

1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”).

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (“Percept and Concept – The Abuse of Concepts”)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.

 

2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above-normal. They did not lose their intellectual ability, but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure persons’ emotional intelligence.

 

3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. If we invent a new word that becomes widely adopted, if we come up with an idea that is both completely original and worthy, that is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason by the most intelligent people in a culture has overcome many backward and barbarous practices, and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and their contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture they have lived in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth-century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)

 

4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is, of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head; you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to interpret medical scans successfully simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience but can’t be fully learned from a text. Many times, when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
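For readers who want a concrete picture of this strengthen-or-weaken process, here is a minimal sketch in Python. It is a toy illustration of my own, not the actual medical-imaging software described above: a single artificial “neuron” learns to sort tiny made-up “scans” purely by having its connection strengths nudged up or down after each guess, with no diagnostic rules ever written down.

```python
# Toy sketch (not a real diagnostic system): a single artificial "neuron"
# learns to separate two kinds of 2x2 "scans" by trial and error.
# Connections that contribute to correct answers are strengthened,
# and connections that contribute to wrong answers are weakened.
import random

random.seed(0)

def make_example():
    """Return (pixels, label): label 1 if the left column is brighter."""
    pixels = [random.random() for _ in range(4)]   # a 2x2 "scan"
    left, right = pixels[0] + pixels[2], pixels[1] + pixels[3]
    return pixels, 1 if left > right else 0

weights = [0.0] * 4   # connection strengths, initially neutral
bias = 0.0
learning_rate = 0.1

for step in range(5000):
    pixels, label = make_example()
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    guess = 1 if activation > 0 else 0
    error = label - guess              # 0 if correct, +/-1 if wrong
    # Strengthen or weaken each connection in proportion to its input.
    for i in range(4):
        weights[i] += learning_rate * error * pixels[i]
    bias += learning_rate * error

# Check how often the trained "network" now answers correctly.
tests = [make_example() for _ in range(1000)]
correct = sum(
    (1 if sum(w * p for w, p in zip(weights, px)) + bias > 0 else 0) == lbl
    for px, lbl in tests
)
print(f"accuracy after training: {correct / len(tests):.2%}")
print("learned connection strengths:", [round(w, 2) for w in weights])
```

Even in this tiny example, the finished “knowledge” is nothing but a handful of numerical connection strengths, not a set of stated rules; that is the sense in which the computer remains a “black box.”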

 

5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by various persons at one point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincaré believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincaré believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.

 

6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics, chemistry, and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________

 

This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle, but which has been used as an example of Aristotle’s logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

Is Truth a Type of Good?

“[T]ruth is one species of good, and not, as is usually supposed, a category distinct from good, and co-ordinate with it. The true is the name of whatever proves itself to be good in the way of belief. . . .” – William James, “What Pragmatism Means”

“Truth is a static intellectual pattern within a larger entity called Quality.” – Robert Pirsig, Lila

 

Does it make sense to think of truth as a type of good? The initial reaction of most people to this claim is negative, sometimes strongly so. Surely what we like and what is true are two different things. The reigning conception of truth is known as the “correspondence theory of truth,” which argues simply that in order for a statement to be true it must correspond to reality. In this view, the words or concepts or claims we state must match real things or events, and match them exactly, whether those things are good or not.

The American philosopher William James (1842-1910) acknowledged that our ideas must agree with reality in order to be true. But where he parted company with most of the rest of the world was in what it meant for an idea to “agree.” In most cases, he argued, ideas cannot directly copy reality. According to James, “of many realities our ideas can only be symbols and not copies. . . . Any idea that helps us to deal, whether practically or intellectually, with either the reality or its belongings, that doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting, will agree sufficiently to meet the requirement.” He also argued that “True ideas are those we can assimilate, validate, corroborate, and verify.” (“Pragmatism’s Conception of Truth”) Many years later, Robert Pirsig argued in Zen and the Art of Motorcycle Maintenance and Lila that the truths of human knowledge, including science, were developed out of an intuitive sense of good or “quality.”

But what does this mean in practice? Many truths are unpleasant, and reality often does not match our desires. Surely truth should correspond to reality, not what is good.

One way of understanding what James and Pirsig meant is to examine the origins and development of language and mathematics. We use written language and mathematics as tools to make statements about reality, but the tools themselves do not merely “copy” or even strictly correspond to reality. In fact, these tools should be understood as symbolic systems for communication and understanding. In the earliest stages of human civilization, these symbolic systems did try to copy or correspond to reality; but the strict limitations of “corresponding” to reality were in fact a hindrance to the truth, and new, more creative symbols were required to allow knowledge to advance.

 

_______________________________

 

The first written languages consisted of pictograms, that is, drawn depictions of actual things — human beings, stars, cats, fish, houses. Pictograms had one big advantage: by clearly depicting the actual appearance of things, everyone could quickly understand them. They were the closest thing to a universal language; anyone from any culture could understand pictograms with little instruction.

However, there were some pretty big disadvantages to the use of pictograms as a written language. Many of the things we all see in everyday life can be clearly communicated through drawings. But there are a lot of ideas, actions, abstract concepts, and details that are not so easily communicated through drawings. How does one depict activities such as running, hunting, fighting, and falling in love, while making it clear that one is communicating an activity and not just a person? How does one depict a tribe, kingdom, battle, or forest, without becoming bogged down in drawing pictograms of all the persons and objects involved? How does one depict attributes and distinguish between specific types of people and specific types of objects? How does one depict feelings, emotions, ideas, and categories? Go through a dictionary at random sometime and see how many words can be depicted in a clear pictogram. There are not many. There is also the problem of differences in artistic ability and the necessity of maintaining standards. Everyone may have a different idea of what a bird looks like and different abilities in drawing a bird.

These limitations led to an interesting development in written language: over hundreds or thousands of years, pictograms became increasingly abstract, to the point at which their form did not copy or correspond to what they represented at all. This development took place across civilizations, as seen in this graphic, in which the top pictograms represent the earliest forms and the bottom ones the later forms:

[Graphic: the evolution of pictograms toward increasingly abstract written forms across civilizations. Source: Wikipedia, https://en.wikipedia.org/wiki/History_of_writing]

Eventually, pictograms were abandoned by most civilizations altogether in favor of alphabets. By using combinations of letters to represent objects and ideas, it became easier for people to learn how to read and write. Instead of having to memorize tens of thousands of pictograms, people simply needed to learn new combinations of letters/sounds. No artistic ability was required.

One could argue that this development in writing systems does not address the central point of the correspondence theory of truth, that a true statement must correspond to reality. In this theory, it is perfectly OK for an abstract symbol to represent something. If someone writes “I caught a fish,” it does not matter if the person draws a fish or uses abstract symbols for a fish, as long as this person, in reality, actually did catch a fish. From the pragmatic point of view, however, the evolution of human symbolic systems toward abstraction is a good illustration of pragmatism’s main point: by making our symbolic systems better, human civilizations were able to communicate more, understand more, educate more, and acquire more knowledge. Pictograms fell short in helping us “deal with reality,” and that’s why written language had to advance above and beyond pictograms.

 

Let us turn to mathematics. The earliest humans were aware of quantities, but tended to depict quantities in a direct and literal manner. For small quantities, such as two, the ancient Egyptians would simply draw two pictograms of the object. Nothing could correspond to reality better than that. However, for larger quantities, it was hard, tedious work to draw the same pictogram over and over. So early humans used tally marks or hash marks to indicate quantities, with “four” represented as four distinct marks ( | | | | ), perhaps followed by a symbol or pictogram of the object. Again, these earliest depictions of numbers were so simple and direct, the correspondence to reality so obvious, that they were easily understood by people from many different cultures.

In retrospect, tally marks appear to be very primitive and hardly a basis for a mathematical system. However, I argue that tally marks were actually a revolutionary advance in how human beings understood quantities — because for the first time, quantity became an abstraction disconnected from particular objects. One did not have to make distinctions between three cats, three kings, or three bushels of grain; the quantity “three” could be understood on its own, without reference to what it was representing. Rather than drawing three cats, three kings, or three bushels of grain, one could use | | |  to represent any group of three objects.

The problem with tally marks, of course, was that this system could not easily handle large quantities or permit complex calculations. So, numerals were invented. The ancient Egyptian numeral system used tally marks for numbers below ten, but then used other symbols for larger quantities: ten, hundred, thousand, and so forth.

The ancient Roman numeral system also evolved out of tally marks, with | | | or III representing “three,” but with different symbols for five (V), ten (X), fifty (L), hundred (C), five hundred (D), and thousand (M). Numbers were depicted by writing the largest numerical symbols on the left and the smallest to the right, adding the symbols together to get the quantity (example: 1350 = MCCCL); a smaller numerical symbol to the left of a larger numerical symbol required subtraction (example: IX = 9). As with the Egyptian system, Roman numerals were able to cope with large numbers, but rather than the more literal depiction offered by tally marks, the symbols were a more creative interpretation of quantity, with implicit calculations required for proper interpretation of the number.
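To make these “implicit calculations” concrete, here is a short illustrative sketch in Python (my own example, not drawn from any historical source) that reads a Roman numeral the way the paragraph above describes: adding the symbols from left to right, and subtracting any smaller symbol that stands in front of a larger one.

```python
# Illustrative sketch: the "implicit calculations" a reader of Roman
# numerals performs. Symbols are added left to right, except that a
# smaller symbol in front of a larger one is subtracted (IX = 9).
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, ch in enumerate(numeral):
        value = VALUES[ch]
        # Subtract if a larger symbol follows (e.g., the I in IX).
        if i + 1 < len(numeral) and VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MCCCL"))      # 1350
print(roman_to_int("IX"))         # 9
print(roman_to_int("DCCCXXXII"))  # 832
```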

The use of numerals by ancient civilizations represented a further increase in the abstraction of quantities. With numerals, one could make calculations of almost any quantity of any objects, even imaginary objects or no objects. Teachers instructed children how to use numerals and how to make calculations, usually without any reference to real-world objects. A minority of intellectuals studied numbers and calculations for many years, developing general theorems about the relationships between quantities. And before long, the power and benefits of mathematics were such that mathematicians became convinced that mathematics was the ultimate reality of the universe, and not the actual objects we once attached to numbers. (On the theory of “mathematical Platonism,” see this post.)

For thousands of years, Roman numerals continued to be used. Rome was able to build and administer a great empire, while using these numerals for accounting, commerce, and engineering. In fact, the Romans were famous for their accomplishments in engineering. It was not until the 14th century that Europe began to discover the virtues of the Hindu-Arabic numeral system. And although it took centuries more, today the Hindu-Arabic system is the most widely-used system of numerals in the world.

Why is this?

The Hindu-Arabic system is noted for two major accomplishments: its positional decimal system and the number zero. The “positional decimal system” simply refers to a base 10 system in which the value of a digit is based upon its position. A single numeral may be multiplied by ten or one hundred or one thousand, depending on its position in the number. For example, the number 832 is: 8×100 + 3×10 + 2. We generally don’t notice this, because we spent years in school learning this system, and it comes to us automatically that the first digit “8” in 832 means 8×100. Roman numerals never worked this way. The Romans grouped quantities in symbols representing ones, fives, tens, fifties, hundreds, and so on, and added the symbols together. So the Roman version of 832 is DCCCXXXII (500 + 100 + 100 + 100 + 10 + 10 + 10 + 1 + 1).
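As a small illustration (again a sketch of my own in Python), the positional breakdown described above can be spelled out mechanically: each digit contributes its face value multiplied by a power of ten fixed purely by its place.

```python
# Illustrative sketch: what the positional decimal system encodes.
# Each digit's contribution is its face value times a power of ten
# determined solely by its position in the number.
def positional_breakdown(number: int) -> str:
    digits = str(number)
    terms = []
    for position, digit in enumerate(digits):
        place_value = 10 ** (len(digits) - position - 1)
        terms.append(f"{digit}x{place_value}")
    return " + ".join(terms)

print(positional_breakdown(832))   # 8x100 + 3x10 + 2x1
```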

Because the Roman numeral system is additive, adding Roman numbers is easy — you just combine all the symbols. But multiplication is harder, and division is even harder, because it’s not so easy to take apart the different symbols. In fact, for many calculations, the Romans used an abacus, rather than trying to write everything down. The Hindu-Arabic system makes multiplication and division easy, because every digit, depending on its placement, is a multiple of 1, 10, 100, 1000, etc.
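To see why position makes multiplication straightforward, here is one more illustrative sketch (my own simplification, not a historical algorithm): long multiplication reduces to single-digit products shifted by powers of ten, which is exactly what positional notation makes visible.

```python
# Illustrative sketch: why positional notation makes multiplication easy.
# Every digit is a multiple of a power of ten, so a product breaks into
# small single-digit products plus shifts, the basis of long multiplication.
def long_multiply(a: int, b: int) -> int:
    total = 0
    for i, digit_a in enumerate(reversed(str(a))):
        for j, digit_b in enumerate(reversed(str(b))):
            total += int(digit_a) * int(digit_b) * 10 ** (i + j)
    return total

print(long_multiply(832, 47))   # 39104
print(832 * 47)                 # 39104 (check against built-in arithmetic)
```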

The invention of the positional decimal system took thousands of years, not because ancient humans were stupid, but because symbolizing quantities and their relationships in a way that is useful is actually hard work and requires creative interpretation. You just don’t look at nature and say, “Ah, there’s the number 12, from the positional decimal system!”

In fact, even many of the simplest numbers took thousands of years to become accepted. The number zero was not introduced to Europe until the 11th century and it took several more centuries for zero to become widely used. Negative numbers did not appear in the West until the 15th century, and even then, they were controversial among the best mathematicians until the 18th century.

The shortcomings of seeing mathematical truths as a simple literal copying of reality become even clearer when one examines the origins and development of weights and measures. Here too, early human beings started out by picking out real objects as standards of measurement, only to find them unsuitable in the long run. One of the most well-known units of measurement in ancient times was the cubit, defined as the length of a man’s forearm from elbow to the tip of the middle finger. The foot was defined as the length of a man’s foot. The inch was the width of a man’s thumb. A basic unit of weight was the grain, that is, a single grain of barley or wheat. All of these measures corresponded to something real, but the problem, of course, was that there was a wide variation in people’s body parts, and grains could also vary in weight. What was needed was standardization; and it was not too long before governing authorities began to establish common standards. In many places throughout the world, authorities agreed that a single definition of each unit, based on a single object kept in storage, would be the standard throughout the land. The objects chosen were a matter of social convention, based upon convenience and usefulness. Nature or reality did not simply provide useful standards of measurement; there was too much variation even among the same types of objects provided by nature.

 

At this point, advocates of the correspondence theory of truth may argue, “Yes, human beings can use a variety of symbolic systems, and some are better than others. But the point is that symbolic systems should all represent the same reality. No matter what mathematical system you use, two plus two should still equal four.”

In response, I would argue that for very simple questions (2+2=4), the type of symbolic system you use will not make a big difference — you can use tally marks, Roman numerals, or Hindu-Arabic numerals. But the type of symbolic system you use will definitely make a difference in how many truths you can uncover and particularly how many complicated truths you can grasp. Without good symbolic systems, many truths will remain forever hidden from us. As it was, the Roman numeral system was probably responsible for the lack of mathematical accomplishments of the Romans, even if their engineering was impressive for the time. And in any case, the pragmatic theory of truth already acknowledges that truth must agree with reality — it just cannot be a copy of reality. In the words of William James, an ideal symbolic system “helps us to deal, whether practically or intellectually, with either the reality or its belongings . . . doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting.” (“Pragmatism’s Conception of Truth”)

Scientific Evidence for the Benefits of Faith

Increasingly, scientific studies have recognized the power of positive expectations in the treatment of people who are suffering from various illnesses. The so-called “placebo” effect is so powerful that studies generally try to control for it: fake pills, fake injections, or sometimes even fake surgeries will be given to one group while another group is offered the “real” treatment. If the real drug or surgery is no better than the fake drug/surgery, then the treatment is considered a failure. What has not been recognized until relatively recently is how the power of positive expectations should be considered as a form of treatment in itself.

Harvard University has recently established a Program in Placebo Studies and the Therapeutic Encounter in order to study this very issue. For many scientists, the power of the placebo has been a scandal and an embarrassment, and the idea of offering a “fake” treatment to a patient seems to go against every ethical and professional principle. But the attitude of Ted Kaptchuk, head of the Harvard program, is that if something works, it’s worth studying, no matter how crazy and irrational it seems.

In fact, “crazy” and “irrational” seem to be apt words to describe the results of research on placebos. Researchers have found differences in the effectiveness of placebos based merely on appearance — large pills are more effective than small pills; two pills are better than one pill; “brand name” pills are more effective than generics; capsules are better than pills; and injections are the most effective of all! Even the color of pills affects the outcome. One study found that the most famous anti-anxiety medication in the world, Valium, has no measurable effect on a person’s anxiety unless the person knows he or she is taking it (see “The Power of Nothing” in the Dec. 12, 2011, New Yorker). The placebo is probably the oldest and simplest form of “faith healing” there is.

There are scientists who are critical of many of these placebo studies; they believe the power of placebos has been greatly exaggerated. Several studies have concluded that the placebo effect is small or insignificant, especially when objective measures of patient improvement are used instead of subjective self-reports.

However, it should be noted that the placebo effect is not simply a matter of patient feelings that are impossible to measure accurately — there is actually scientific evidence that the human brain manufactures chemicals in response to positive expectations. In the 1970s, it was discovered that people who reported a reduction in pain in response to a placebo were actually producing greater amounts of endorphins, a substance in the brain chemically similar to morphine and heroin that reduces pain and is capable of producing feelings of euphoria (as in the “runner’s high”). Increasingly, studies of the placebo effect have relied on brain scans to actually track changes in the brain in response to a patient receiving a placebo, so measurement of effects is not merely a matter of relying on what a person says. One recent study found that patients suffering from Parkinson’s disease responded better to an “expensive” placebo than a “cheaper” placebo. Patients were given injections containing nothing but saline water, but the arm of the study that was told the saline solution cost $1,500 per dose experienced significantly greater improvements in motor function than the arm given the “cheaper” placebo! This happens because the placebo effect boosts the brain’s production of dopamine, which counteracts the effects of Parkinson’s disease. Brain scans have confirmed greater dopamine activation in the brains of those given placebos.

Other studies have confirmed the close relation between the health of the human mind and the health of the body. Excessive stress weakens the immune system, creating an opening for illness. People who regularly practice meditation, on the other hand, can strengthen their immune system and, as a result, catch colds and the flu less often. The health effects of meditation do not depend on the religion of those practicing it — Buddhist, Christian, Sikh. The mere act of meditation is what is important.

Why has modern medicine been so slow and reluctant to acknowledge the power of positive expectations and spirituality in improving human health? I think it’s because modern science has been based on certain metaphysical assumptions about nature which have been very valuable in advancing knowledge historically, but are ultimately limited and flawed. These assumptions are: (1) Anything that exists solely in the human mind is not real; (2) Knowledge must be based on what exists objectively, that is, what exists outside the mind; and (3) Everything in nature is based on material causation — impersonal objects colliding with or forming bonds with other impersonal objects. In many respects, these metaphysical assumptions were valuable in overcoming centuries of wrong beliefs and superstitions. Scientists learned to observe nature in a disinterested fashion, to discover how nature actually was and not how we wanted it to be. Old myths about gods and personal spirits shaping nature became obsolete, to be replaced by theories of material causation, which led to technological advances that brought the human race enormous benefits.

The problem with these metaphysical assumptions, however, is that they draw too sharp a separation between the human mind and what exists outside the mind. The human mind is part of reality, embedded in reality. Scientists rely on concepts created by the human mind to understand reality, and multiple, contradictory concepts and theories may be needed to understand reality.  (See here and here). And the human mind can modify reality – it is not just a passive spectator. The mind affects the body directly because it is directly connected to the body. But the mind can also affect reality by directing the limbs to perform certain tasks — construct a house, create a computer, or build a spaceship.

So if the human mind can shape the reality of the body through positive expectations, can positive expectations bring additional benefits, beyond health? According to the American philosopher William James in his essay “The Will to Believe,” a leap of faith could be justified in certain restricted circumstances: when a momentous decision must be made, there is a large element of uncertainty, and there are not enough resources and time to reduce the uncertainty. (See this post.) In James’ view, in some cases, we must take the risk of supposing something is true, lest we lose the opportunity of gaining something beneficial. In short, “Faith in a fact can help create that fact.”

Scientific research on how expectations affect human performance tends to support James’ claim. Performance in sports is often influenced by athletes’ expectations of “good luck.” People who are optimistic and visualize their ideal goals are more likely to actually attain their goals than people who don’t. One recent study found that human performance in a color discrimination task is better when the subjects are provided a lamp that has a label touting environmental friendliness. Telling people about stereotypes before crucial tests affects how well people perform on tests — Asians who are told about how good Asians are at math perform better on math tests; women who are sent the message that women are not as smart perform less well on tests. When golfers are told that winning at golf is a matter of intelligence, white golfers improve their performance; when golfers are told that golf is a matter of natural athleticism, black golfers do better.

Now, I am not about to tell you that faith is good in all circumstances and that you should always have faith. Applied across the board, faith can hurt you or even kill you. Relying solely on faith is not likely to cure cancer or other serious illnesses. Worshipers in some Pentecostal churches who handle poisonous snakes sometimes die from snake bites. And terrorists who think they will be rewarded in the afterlife for killing innocent people are truly deluded.

So what is the proper scope for faith? When should it be used and when should it not be used? Here are three rules:

First, faith must be restricted to the zone of uncertainty that always exists when evaluating facts. One can have faith in things that are unknown or not fully known, but one should not have faith in things that are contrary to facts that have been well-established by empirical research. One cannot simply say that one’s faith forbids belief in the scientific findings on evolution and the big bang, or that faith requires that one’s holy text is infallible in all matters of history, morals, and science.

Second, the benefits of faith cannot be used as evidence for belief in certain facts. A person who finds relief from Parkinson’s disease by imagining the healing powers of Christ’s love cannot argue that this proves that Jesus was truly the son of God, that Jesus could perform miracles, was crucified, and rose from the dead. These are factual claims that may or may not be historically accurate. Likewise with the golden plates of Joseph Smith that were allegedly the basis for the Book of Mormon or the ascent of the prophet Muhammad to heaven — faith does not prove any of these alleged facts. If there was evidence that one particular religious belief tended to heal people much better than other religious beliefs, then one might devote effort to examining if the facts of that religion were true. But there does not seem to be a difference among faiths — just about any faith, even the simplest faith in a mere sugar pill, seems to work.

Finally, faith should not run unnecessary risks. Faith is a supplement to reason, research, and science, not an alternative. Science, including medical science, works. If you get sick, you should go to a doctor first, then rely on faith. As the prophet Muhammad said, “Tie your camel first, then put your trust in Allah.”

Faith and Truth

The American philosopher William James argued in his essay “The Will to Believe”  that there were circumstances under which it was not only permissible to respond to the problem of uncertainty by making a leap of faith, it was necessary to do so lest one lose the truth by not making a decision.

Most scientific questions, James argued, were not the sort of momentous issues that required an immediate decision.  One could step back, evaluate numerous hypotheses, engage in lengthy testing of such hypotheses, and make tentative, uncertain conclusions that would ultimately be subject to additional testing.  However, outside the laboratory, real-world issues often required decisions to be made on the spot despite a high degree of uncertainty, and not making a decisional commitment ran the same risk of losing the truth as making an erroneous decision.  Discovering truth, wrote James, is not the same as avoiding error, and one who is devoted wholeheartedly to the latter will be apt to make little progress in gaining the truth.

In James’s view, we live in a dynamic universe, not a static universe, and our decisions in themselves affect the likelihood of certain events becoming true.  In matters of love, friendship, career, and morals, the person who holds back from making a decision for fear of being wrong will lose opportunities for affecting the future in a positive fashion.  Anyone who looks back honestly on one’s life can surely admit to lost opportunities of this type.  As James wrote, “[f]aith in a fact can help create the fact.”

Now of course there are many counterexamples of people who have suffered serious loss, injury, and death because they made an unjustified leap of faith.  So one has to carefully consider the possible consequences of being wrong.  But many times, the most negative consequences of making a leap of faith are merely the same type of rejection or failure that would occur if one did not make a decisional commitment at all.

There is a role for skepticism in reason, a very large role, but there are circumstances in which excessive skepticism can lead to a paralysis of the will, leading to certain loss.  Skepticism and faith have to be held in balance, with skepticism applied primarily to low-impact issues not requiring an immediate decision.