Is Truth a Type of Good?

“[T]ruth is one species of good, and not, as is usually supposed, a category distinct from good, and co-ordinate with it. The true is the name of whatever proves itself to be good in the way of belief. . . .” – William James, “What Pragmatism Means”

“Truth is a static intellectual pattern within a larger entity called Quality.” – Robert Pirsig, Lila


Does it make sense to think of truth as a type of good? The initial reaction of most people to this claim is negative, sometimes strongly so. Surely what we like and what is true are two different things. The reigning conception of truth is known as the “correspondence theory of truth,” which argues simply that in order for a statement to be true it must correspond to reality. In this view, the words or concepts or claims we state must match real things or events, and match them exactly, whether those things are good or not.

The American philosopher William James (1842-1910) acknowledged that our ideas must agree with reality in order to be true. But where he parted company with most of the rest of the world was in what it meant for an idea to “agree.” In most cases, he argued, ideas cannot directly copy reality. According to James, “of many realities our ideas can only be symbols and not copies. . . . Any idea that helps us to deal, whether practically or intellectually, with either the reality or its belongings, that doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting, will agree sufficiently to meet the requirement.” He also argued that “True ideas are those we can assimilate, validate, corroborate, and verify.” (“Pragmatism’s Conception of Truth”) Many years later, Robert Pirsig argued in Zen and the Art of Motorcycle Maintenance and Lila that the truths of human knowledge, including science, were developed out of an intuitive sense of good or “quality.”

But what does this mean in practice? Many truths are unpleasant, and reality often does not match our desires. Surely truth should correspond to reality, not what is good.

One way of understanding what James and Pirsig meant is to examine the origins and development of language and mathematics. We use written language and mathematics as tools to make statements about reality, but the tools themselves do not merely “copy” or even strictly correspond to reality. In fact, these tools should be understood as symbolic systems for communication and understanding. In the earliest stages of human civilization, these symbolic systems did try to copy or correspond to reality; but the strict limitations of “corresponding” to reality were in fact a hindrance to the truth, requiring new creative symbols that allowed knowledge to advance.


_______________________________


The first written languages consisted of pictograms, that is, drawn depictions of actual things — human beings, stars, cats, fish, houses. Pictograms had one big advantage: by clearly depicting the actual appearance of things, everyone could quickly understand them. They were the closest thing to a universal language; anyone from any culture could understand pictograms with little instruction.

However, there were some pretty big disadvantages to the use of pictograms as a written language. Many of the things we all see in everyday life can be clearly communicated through drawings. But there are a lot of ideas, actions, abstract concepts, and details that are not so easily communicated through drawings. How does one depict activities such as running, hunting, fighting, and falling in love, while making it clear that one is communicating an activity and not just a person? How does one depict a tribe, kingdom, battle, or forest, without becoming bogged down in drawing pictograms of all the persons and objects involved? How does one depict attributes and distinguish between specific types of people and specific types of objects? How does one depict feelings, emotions, ideas, and categories? Go through a dictionary at random sometime and see how many words can be depicted in a clear pictogram. There are not many. There is also the problem of differences in artistic ability and the necessity of maintaining standards. Everyone may have a different idea of what a bird looks like and different abilities in drawing a bird.

These limitations led to an interesting development in written language: over hundreds or thousands of years, pictograms became increasingly abstract, to the point at which their form did not copy or correspond to what they represented at all. This development took place across civilizations, as seen in this graphic, in which the top pictograms represent the earliest forms and the bottom ones the later forms:

(Source: Wikipedia, https://en.wikipedia.org/wiki/History_of_writing)

Eventually, pictograms were abandoned by most civilizations altogether in favor of alphabets. By using combinations of letters to represent objects and ideas, it became easier for people to learn how to read and write. Instead of having to memorize tens of thousands of pictograms, people simply needed to learn new combinations of letters/sounds. No artistic ability was required.
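To put a rough number on the advantage of alphabets, here is a back-of-the-envelope sketch in Python (purely illustrative, my own example rather than anything from the historical record): even very short combinations of 26 letters quickly outnumber the tens of thousands of pictograms a scribe once had to memorize.

```python
# Rough combinatorics of an alphabet: how many distinct strings of each
# length can be formed from 26 letters? (Real spelling rules permit far
# fewer, but the exponential scaling is the point.)
ALPHABET_SIZE = 26

for length in range(1, 6):
    print(f"{length}-letter combinations: {ALPHABET_SIZE ** length:,}")

# Output:
# 1-letter combinations: 26
# 2-letter combinations: 676
# 3-letter combinations: 17,576
# 4-letter combinations: 456,976
# 5-letter combinations: 11,881,376
```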

One could argue that this development in writing systems does not address the central point of the correspondence theory of truth, that a true statement must correspond to reality. In this theory, it is perfectly OK for an abstract symbol to represent something. If someone writes “I caught a fish,” it does not matter if the person draws a fish or uses abstract symbols for a fish, as long as this person, in reality, actually did catch a fish. From the pragmatic point of view, however, the evolution of human symbolic systems toward abstraction is a good illustration of pragmatism’s main point: by making our symbolic systems better, human civilizations were able to communicate more, understand more, educate more, and acquire more knowledge. Pictograms fell short in helping us “deal with reality,” and that’s why written language had to advance above and beyond pictograms.


Let us turn to mathematics. The earliest humans were aware of quantities, but tended to depict quantities in a direct and literal manner. For small quantities, such as two, the ancient Egyptians would simply draw two pictograms of the object. Nothing could correspond to reality better than that. However, for larger quantities, it was hard, tedious work to draw the same pictogram over and over. So early humans used tally marks or hash marks to indicate quantities, with “four” represented as four distinct marks:  | | | | and then perhaps a symbol or pictogram of the object. Again, these earliest depictions of numbers were so simple and direct, the correspondence to reality so obvious, that they were easily understood by people from many different cultures.

In retrospect, tally marks appear to be very primitive and hardly a basis for a mathematical system. However, I argue that tally marks were actually a revolutionary advance in how human beings understood quantities — because for the first time, quantity became an abstraction disconnected from particular objects. One did not have to make distinctions between three cats, three kings, or three bushels of grain; the quantity “three” could be understood on its own, without reference to what it was representing. Rather than drawing three cats, three kings, or three bushels of grain, one could use | | |  to represent any group of three objects.

The problem with tally marks, of course, was that this system could not easily handle large quantities or permit complex calculations. So, numerals were invented. The ancient Egyptian numeral system used tally marks for numbers below ten, but then used other symbols for larger quantities: ten, hundred, thousand, and so forth.

The ancient Roman numeral system also evolved out of tally marks, with | | | or III representing “three,” but with different symbols for five (V), ten (X), fifty (L), hundred (C), five hundred (D), and thousand (M). Numbers were depicted by writing the largest numerical symbols on the left and the smallest to the right, adding the symbols together to get the quantity (example: 1350 = MCCCL); a smaller numerical symbol to the left of a larger numerical symbol required subtraction (example: IX = 9). As with the Egyptian system, Roman numerals were able to cope with large numbers, but rather than the more literal depiction offered by tally marks, the symbols were a more creative interpretation of quantity, with implicit calculations required for proper interpretation of the number.
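For readers who want to see those implicit calculations spelled out, here is a minimal Python sketch of the interpretation rule just described (the function `roman_to_int` is my own illustrative name, not a standard library routine): add each symbol’s value, subtracting instead whenever a smaller symbol stands to the left of a larger one.

```python
# Values of the seven Roman symbols described above.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Interpret a Roman numeral: add each value, but subtract a value
    that appears immediately before a larger one (e.g., the I in IX)."""
    total = 0
    for i, symbol in enumerate(numeral):
        value = ROMAN_VALUES[symbol]
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value  # smaller symbol to the left: subtract
        else:
            total += value  # otherwise: add
    return total

print(roman_to_int("MCCCL"))  # 1350
print(roman_to_int("IX"))     # 9
```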

The use of numerals by ancient civilizations represented a further increase in the abstraction of quantities. With numerals, one could make calculations of almost any quantity of any objects, even imaginary objects or no objects. Teachers instructed children how to use numerals and how to make calculations, usually without any reference to real-world objects. A minority of intellectuals studied numbers and calculations for many years, developing general theorems about the relationships between quantities. And before long, the power and benefits of mathematics convinced many mathematicians that mathematics itself, and not the actual objects we once attached to numbers, was the ultimate reality of the universe. (On the theory of “mathematical Platonism,” see this post.)

For well over a thousand years, Roman numerals continued to be used. Rome was able to build and administer a great empire while using these numerals for accounting, commerce, and engineering. In fact, the Romans were famous for their accomplishments in engineering. It was not until the 14th century that Europe began to discover the virtues of the Hindu-Arabic numeral system. And although it took centuries more, today the Hindu-Arabic system is the most widely used system of numerals in the world.

Why is this?

The Hindu-Arabic system is noted for two major accomplishments: its positional decimal system and the number zero. The “positional decimal system” simply refers to a base 10 system in which the value of a digit is based upon its position. A single numeral may be multiplied by ten or one hundred or one thousand, depending on its position in the number. For example, the number 832 is: 8×100 + 3×10 + 2. We generally don’t notice this, because we spent years in school learning this system, and it comes to us automatically that the first digit “8” in 832 means 8×100. Roman numerals never worked this way. The Romans grouped quantities in symbols representing ones, fives, tens, fifties, hundreds, etc., and added the symbols together. So the Roman version of 832 is DCCCXXXII (500 + 100 + 100 + 100 + 10 + 10 + 10 + 1 + 1).
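A one-function Python sketch (the helper `positional_value` is hypothetical, for illustration only) makes the positional rule explicit: each digit is multiplied by the power of ten that its place assigns to it, and the results are summed.

```python
def positional_value(digits: str, base: int = 10) -> int:
    """Sum each digit times the power of the base given by its position."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** position
    return total

print(positional_value("832"))  # 8*100 + 3*10 + 2 = 832
```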

Because the Roman numeral system is additive, adding Roman numerals is easy — you just combine all the symbols. But multiplication is harder, and division is even harder, because it’s not so easy to take apart the different symbols. In fact, for many calculations, the Romans used an abacus rather than trying to write everything down. The Hindu-Arabic system makes multiplication and division easy, because every digit, depending on its placement, is a multiple of 1, 10, 100, 1000, etc.
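To see why placement makes written arithmetic mechanical, consider a sketch of schoolbook long multiplication (again illustrative, with hypothetical helper names): each digit of one factor yields a partial product that is merely shifted by a power of ten according to its position, and the partial products are added.

```python
def long_multiply(a: int, b_digits: str) -> int:
    """Schoolbook long multiplication: one partial product per digit of b,
    shifted left by the digit's position, then summed."""
    total = 0
    for position, digit in enumerate(reversed(b_digits)):
        partial = a * int(digit)           # a single-digit multiplication
        total += partial * 10 ** position  # the "shift" is a power of ten
    return total

print(long_multiply(832, "47"))  # 39104 -- try that in Roman numerals!
```

Nothing in the Roman system offers an analogous routine; there is no digit whose written position encodes a power of the base that a calculator on paper could exploit.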

The invention of the positional decimal system took thousands of years, not because ancient humans were stupid, but because symbolizing quantities and their relationships in a way that is useful is actually hard work and requires creative interpretation. You just don’t look at nature and say, “Ah, there’s the number 12, from the positional decimal system!”

In fact, even many of the simplest numbers took thousands of years to become accepted. The number zero was not introduced to Europe until the 11th century and it took several more centuries for zero to become widely used. Negative numbers did not appear in the west until the 15th century, and even then, they were controversial among the best mathematicians until the 18th century.

The shortcomings of seeing mathematical truths as a simple literal copying of reality become even clearer when one examines the origins and development of weights and measures. Here too, early human beings started out by picking out real objects as standards of measurement, only to find them unsuitable in the long run. One of the most well-known units of measurement in ancient times was the cubit, defined as the length of a man’s forearm from elbow to the tip of the middle finger. The foot was defined as the length of a man’s foot. The inch was the width of a man’s thumb. A basic unit of weight was the grain, that is, a single grain of barley or wheat. All of these measures corresponded to something real, but the problem, of course, was that there was a wide variation in people’s body parts, and grains could also vary in weight. What was needed was standardization; and it was not too long before governing authorities began to establish common standards. In many places throughout the world, authorities agreed that a single definition of each unit, based on a single object kept in storage, would be the standard throughout the land. The objects chosen were a matter of social convention, based upon convenience and usefulness. Nature or reality did not simply provide useful standards of measurement; there was too much variation even among the same types of objects provided by nature.


At this point, advocates of the correspondence theory of truth may argue, “Yes, human beings can use a variety of symbolic systems, and some are better than others. But the point is that symbolic systems should all represent the same reality. No matter what mathematical system you use, two plus two should still equal four.”

In response, I would argue that for very simple questions (2+2=4), the type of symbolic system you use will not make a big difference — you can use tally marks, Roman numerals, or Hindu-Arabic numerals. But the type of symbolic system you use will definitely make a difference in how many truths you can uncover and particularly how many complicated truths you can grasp. Without good symbolic systems, many truths will remain forever hidden from us. As it was, the Roman numeral system was probably responsible for the lack of mathematical accomplishments of the Romans, even if their engineering was impressive for the time. And in any case, the pragmatic theory of truth already acknowledges that truth must agree with reality — it just cannot be a copy of reality. In the words of William James, an ideal symbolic system “helps us to deal, whether practically or intellectually, with either the reality or its belongings . . . doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting.” (“Pragmatism’s Conception of Truth”)

Scientific Evidence for the Benefits of Faith

Increasingly, scientific studies have recognized the power of positive expectations in the treatment of people who are suffering from various illnesses. The so-called “placebo” effect is so powerful that studies generally try to control for it: fake pills, fake injections, or sometimes even fake surgeries will be given to one group while another group is offered the “real” treatment. If the real drug or surgery is no better than the fake drug/surgery, then the treatment is considered a failure. What has not been recognized until relatively recently is that the power of positive expectations can be a form of treatment in itself.

Recently, Harvard University has established a Program in Placebo Studies and the Therapeutic Encounter in order to study this very issue. For many scientists, the power of the placebo has been a scandal and an embarrassment, and the idea of offering a “fake” treatment to a patient seems to go against every ethical and professional principle. But the attitude of Ted Kaptchuk, head of the Harvard program, is that if something works, it’s worth studying, no matter how crazy and irrational it seems.

In fact, “crazy” and “irrational” seem to be apt words to describe the results of research on placebos. Researchers have found differences in the effectiveness of placebos based merely on appearance — large pills are more effective than small pills; two pills are better than one pill; “brand name” pills are more effective than generics; capsules are better than pills; and injections are the most effective of all! Even the color of pills affects the outcome. One study found that the most famous anti-anxiety medication in the world, Valium, has no measurable effect on a person’s anxiety unless the person knows he or she is taking it (see “The Power of Nothing” in the Dec. 12, 2011 New Yorker). The placebo is probably the oldest and simplest form of “faith healing” there is.

There are scientists who are critical of many of these placebo studies; they believe the power of placebos has been greatly exaggerated. Several studies have concluded that the placebo effect is small or insignificant, especially when objective measures of patient improvement are used instead of subjective self-reports.

However, it should be noted that the placebo effect is not simply a matter of patient feelings that are impossible to measure accurately — there is actually scientific evidence that the human brain manufactures chemicals in response to positive expectations. In the 1970s, it was discovered that people who reported a reduction in pain in response to a placebo were actually producing greater amounts of endorphins, substances in the brain chemically similar to morphine and heroin that reduce pain and are capable of producing feelings of euphoria (as in the “runner’s high”). Increasingly, studies of the placebo effect have relied on brain scans to actually track changes in the brain in response to a patient receiving a placebo, so measurement of effects is not merely a matter of relying on what a person says. One recent study found that patients suffering from Parkinson’s disease responded better to an “expensive” placebo than a “cheaper” one. Patients were given injections containing nothing but saline water, but the arm of patients who were told the saline solution cost $1,500 per dose experienced significantly greater improvements in motor function than the arm given the “cheaper” placebo! This happens because the placebo effect boosts the brain’s production of dopamine, which counteracts the effects of Parkinson’s disease. Brain scans have confirmed greater dopamine activation in the brains of those given placebos.

Other studies have confirmed the close relation between the health of the human mind and the health of the body. Excessive stress weakens the immune system, creating an opening for illness. People who regularly practice meditation, on the other hand, can strengthen their immune system and, as a result, catch colds and the flu less often. The health effects of meditation do not depend on the religion of those practicing it — Buddhist, Christian, Sikh. The mere act of meditation is what is important.

Why has modern medicine been so slow and reluctant to acknowledge the power of positive expectations and spirituality in improving human health? I think it’s because modern science has been based on certain metaphysical assumptions about nature which have been very valuable in advancing knowledge historically, but are ultimately limited and flawed. These assumptions are: (1) anything that exists solely in the human mind is not real; (2) knowledge must be based on what exists objectively, that is, what exists outside the mind; and (3) everything in nature is based on material causation — impersonal objects colliding with or forming bonds with other impersonal objects. In many respects, these metaphysical assumptions were valuable in overcoming centuries of wrong beliefs and superstitions. Scientists learned to observe nature in a disinterested fashion, to discover how nature actually was and not how we wanted it to be. Old myths about gods and personal spirits shaping nature became obsolete, to be replaced by theories of material causation, which led to technological advances that brought the human race enormous benefits.

The problem with these metaphysical assumptions, however, is that they draw too sharp a separation between the human mind and what exists outside the mind. The human mind is part of reality, embedded in reality. Scientists rely on concepts created by the human mind to understand reality, and multiple, contradictory concepts and theories may be needed to understand reality.  (See here and here). And the human mind can modify reality – it is not just a passive spectator. The mind affects the body directly because it is directly connected to the body. But the mind can also affect reality by directing the limbs to perform certain tasks — construct a house, create a computer, or build a spaceship.

So if the human mind can shape the reality of the body through positive expectations, can positive expectations bring additional benefits, beyond health? According to the American philosopher William James in his essay “The Will to Believe,” a leap of faith could be justified in certain restricted circumstances: when a momentous decision must be made, there is a large element of uncertainty, and there are not enough resources and time to reduce the uncertainty. (See this post.) In James’s view, in some cases we must take the risk of supposing something is true, lest we lose the opportunity of gaining something beneficial. In short, “Faith in a fact can help create that fact.”

Scientific research on how expectations affect human performance tends to support James’ claim. Performance in sports is often influenced by athletes’ expectations of “good luck.” People who are optimistic and visualize their ideal goals are more likely to actually attain their goals than people who don’t. One recent study found that human performance in a color discrimination task is better when the subjects are provided a lamp that has a label touting environmental friendliness. Telling people about stereotypes before crucial tests affects how well people perform on tests — Asians who are told about how good Asians are at math perform better on math tests; women who are sent the message that women are not as smart perform less well on tests. When golfers are told that winning golf is a matter of intelligence, white golfers improve their performance; when golfers are told that golf is a matter of natural athleticism, blacks do better.

Now, I am not about to tell you that faith is good in all circumstances and that you should always have faith. Applied across the board, faith can hurt you or even kill you. Relying solely on faith is not likely to cure cancer or other serious illnesses. Worshipers in some Pentecostal churches who handle poisonous snakes sometimes die from snake bites. And terrorists who think they will be rewarded in the afterlife for killing innocent people are truly deluded.

So what is the proper scope for faith? When should it be used and when should it not be used? Here are three rules:

First, faith must be restricted to the zone of uncertainty that always exists when evaluating facts. One can have faith in things that are unknown or not fully known, but one should not have faith in things that are contrary to facts that have been well-established by empirical research. One cannot simply say that one’s faith forbids belief in the scientific findings on evolution and the big bang, or that faith requires that one’s holy text is infallible in all matters of history, morals, and science.

Second, the benefits of faith cannot be used as evidence for belief in certain facts. A person who finds relief from Parkinson’s disease by imagining the healing powers of Christ’s love cannot argue that this proves that Jesus was truly the son of God, that Jesus could perform miracles, was crucified, and rose from the dead. These are factual claims that may or may not be historically accurate. Likewise with the golden plates of Joseph Smith that were allegedly the basis for the Book of Mormon or the ascent of the prophet Muhammad to heaven — faith does not prove any of these alleged facts. If there was evidence that one particular religious belief tended to heal people much better than other religious beliefs, then one might devote effort to examining if the facts of that religion were true. But there does not seem to be a difference among faiths — just about any faith, even the simplest faith in a mere sugar pill, seems to work.

Finally, faith should not run unnecessary risks. Faith is a supplement to reason, research, and science, not an alternative. Science, including medical science, works. If you get sick, you should go to a doctor first, then rely on faith. As the prophet Muhammad said, “Tie your camel first, then put your trust in Allah.”

Faith and Truth

The American philosopher William James argued in his essay “The Will to Believe” that there were circumstances under which it was not only permissible to respond to the problem of uncertainty by making a leap of faith, but necessary to do so, lest one lose the truth by not making a decision.

Most scientific questions, James argued, were not the sort of momentous issues that required an immediate decision.  One could step back, evaluate numerous hypotheses, engage in lengthy testing of such hypotheses, and make tentative, uncertain conclusions that would ultimately be subject to additional testing.  However, outside the laboratory, real-world issues often required decisions to be made on the spot despite a high degree of uncertainty, and not making a decisional commitment ran the same risk of losing the truth as making an erroneous decision.  Discovering truth, wrote James, is not the same as avoiding error, and one who is devoted wholeheartedly to the latter will be apt to make little progress in gaining the truth.

In James’s view, we live in a dynamic universe, not a static universe, and our decisions in themselves affect the likelihood of certain events becoming true.  In matters of love, friendship, career, and morals, the person who holds back from making a decision for fear of being wrong will lose opportunities for affecting the future in a positive fashion.  Anyone who looks back honestly on one’s life can surely admit to lost opportunities of this type.  As James wrote, “[f]aith in a fact can help create the fact.”

Now of course there are many counterexamples of people who have suffered serious loss, injury, and death because they made an unjustified leap of faith.  So one has to carefully consider the possible consequences of being wrong.  But many times, the most negative consequences of making a leap of faith are merely the same type of rejection or failure that would occur if one did not make a decisional commitment at all.

There is a role for skepticism in reason, a very large role, but there are circumstances in which excessive skepticism can lead to a paralysis of the will, leading to certain loss.  Skepticism and faith have to be held in balance, with skepticism applied primarily to low-impact issues not requiring an immediate decision.