The Dynamic Quality of Henri Bergson

Robert Pirsig writes in Lila that Quality contains a dynamic good in addition to a static good. This dynamic good consists of a search for “betterness” that is unplanned and has no specific destination, but is nevertheless responsible for all progress. Once a dynamic good solidifies into a concept, practice, or tradition in a culture, it becomes a static good. Creativity, mysticism, dreams, and even good guesses or luck are examples of dynamic good in action. Religious traditions, laws, and science textbooks are examples of static goods.

Pirsig describes dynamic quality as the “pre-intellectual cutting edge of reality.” By this, he means that before concepts, logic, laws, and mathematical formulas are discovered, there is a process of searching and grasping that has not yet settled into a pattern or solution. For example, invention and discovery are often not the outcome of calculation or logical deduction, but of a “free association of ideas” that tends to occur when one is not mentally concentrating at all. Many creative people, from writers to mathematicians, have noted that they came up with their best ideas while resting, engaging in everyday activities, or dreaming.

Dynamic quality is not just responsible for human creation — it is fundamental to all evolution, from the physical level of atoms and molecules, to the biological level of life forms, to the social level of human civilization, to the intellectual level of human thought. Dynamic quality exists everywhere, but it has no specific goals or plans — it always consists of spur-of-the-moment actions, decisions, and guesses about how to overcome obstacles to “betterness.”

It is difficult to conceive of dynamic quality — by its very nature, it is resistant to conceptualization and definition, because it has no stable form or structure. If it did have a stable form or structure, it would not be dynamic.

However, the French philosopher Henri Bergson (1859-1941) provided a way to think about dynamic quality, by positing change as the fundamental nature of reality. (See Beyond the “Mechanism” Metaphor in Physics.) In Bergson’s view, traditional reason, science, and philosophy created static, eternal forms and posited these forms as the foundation of reality — but in fact these forms were tools for understanding reality and not reality itself. Reality always flowed and was impossible to fully capture in any static conceptual form. This flow could best be understood through perception rather than conception. Unfortunately, as philosophy created larger and larger conceptual categories, it tended to become dominated by empty abstractions such as “substance,” “numbers,” and “ideas.” Bergson proposed that only an intuitive approach that enlarged perceptual knowledge through feeling and imagination could advance philosophy out of the dead end of static abstractions.

________________________

The Flow of Time

Bergson argued that we miss the flow of time when we use the traditional tools of science, mathematics, and philosophy. Science conceives of time as simply one coordinate in a deterministic space-time block ruled by eternal laws; mathematics conceives of time as consisting of equal segments on a graph; and philosophers since Plato have conceptualized the world as consisting of the passing shadows of eternal forms.

These may be useful conceptualizations, argues Bergson, but they do not truly grasp time. Whether it is an eternal law, a graph, or an eternal form, such depictions are snapshots of reality; they do not and cannot represent the indivisible flow of time that we experience. The laws of science in particular neglected the elements of indeterminism and freedom in the universe. (Henri Bergson once debated Einstein on this topic). The neglect of real change by science was the result of science’s ambition to foresee all things, which motivated scientists to focus on the repeatable and calculable elements of nature, rather than the genuinely new. (The Creative Mind, Mineola, New York: Dover, 2007, p. 3) Those events that could not be predicted were tossed aside as being merely random or unknowable. As for philosophy, Bergson complained that the eternal forms of the philosophers were empty abstractions — the categories of beauty and justice and truth were insufficient to serve as representations of real experience.

Actual reality, according to Bergson, consisted of “unceasing creation, the uninterrupted upsurge of novelty.” (The Creative Mind, p. 7) Time was not merely a coordinate for recording motion in a deterministic universe; time was “a vehicle of creation and choice.” (p. 75) The reality of change could not be captured in static concepts, but could only be grasped intuitively. While scientists saw evolution as a combination of mechanism and random change, Bergson saw evolution as the result of a vital impetus (élan vital) that pervaded the universe. Although this vital impetus possessed an original unity, individual life forms used it for their own ends, creating conflict between life forms. (Creative Evolution, pp. 50-51)

Biologists attacked Bergson on the grounds that there was no “vital impetus” that they could detect and measure. They argued from the reductionist premise that everything could be explained by reference to smaller parts: since there was no single detectable force animating life, there was no “vital impetus.” Bergson’s premise, however, was holistic, referring to the broader action of organic development from lower orders to higher orders, culminating in human beings. There was no separate force — rather, entities organized, survived, and reproduced by absorbing and processing energy, in multiple forms. In the words of one eminent biologist, organisms are “resilient patterns . . . in an energy flow.” There is no separate or unique energy of life – just energy.

The Superiority of Perception over Conception

Bergson, like William James, believed that all knowledge originated in perception and feeling; as human mental powers increased, conceptual categories were created to organize and generalize what we (and others) discovered through our senses. Concepts were necessary to advance human knowledge, of course. But over time, abstract concepts came to dominate human thought to the point at which pure ideas were conceived as the ultimate reality — hence Platonism in philosophy, mathematical Platonism in mathematics, and eternal laws in science. Bergson believed that although we needed concepts, we also needed to rediscover the roots of concepts in perception and feeling:

If the senses and the consciousness had an unlimited scope, if in the double direction of matter and mind the faculty of perceiving was indefinite, one would not need to conceive any more than to reason. Conceiving is a make-shift when perception is not granted to us, and reasoning is done in order to fill up the gaps of perception or to extend its scope. I do not deny the utility of abstract and general ideas, — any more than I question the value of bank-notes. But just as the note is only a promise of gold, so a conception has value only through the eventual perceptions it represents. . . . the most ingeniously assembled conceptions and the most learnedly constructed reasonings collapse like a house of cards the moment the fact — a single fact rarely seen — collides with these conceptions and these reasonings. There is not a single metaphysician, moreover, not one theologian, who is not ready to affirm that a perfect being is one who knows all things intuitively without having to go through reasoning, abstraction and generalisation. (The Creative Mind, pp. 108-9)

In the end, despite their obvious utility, the conceptions of philosophy and science tend “to weaken our concrete vision of the universe.” (p. 111) But we clearly do not have God-like powers to perceive everything, and we are not likely to get such powers. So what do we do? Bergson argues that instead of “trying to rise above our perception of things” through concepts, we “plunge into [perception] for the purpose of deepening it and widening it.” (p. 111) But how exactly are we to do this?

Enlarging Perception

There is one group of people, argues Bergson, that has mastered the ability to deepen and widen perception: artists. From paintings and poetry to novels and musical compositions, artists are able to show us things and events that we do not directly perceive, and to evoke moods within us that we can understand even if the particular form the artist presents is one we have never seen or heard before. Bergson writes that artists are idealists who are often absent-mindedly detached from “reality.” But it is precisely because artists are detached from everyday living that they are able to see things that ordinary, practical people do not:

[Our] perception . . . isolates that part of reality as a whole that interests us; it shows us less the things themselves than the use we can make of them. It classifies, it labels them beforehand; we scarcely look at the object, it is enough for us to know which category it belongs to. But now and then, by a lucky accident, men arise whose senses or whose consciousness are less adherent to life. Nature has forgotten to attach their faculty of perceiving to their faculty of acting. When they look at a thing, they see it for itself, and not for themselves. They do not perceive simply with a view to action; they perceive in order to perceive — for nothing, for the pleasure of doing so. In regard to a certain aspect of their nature, whether it be their consciousness or one of their senses, they are born detached; and according to whether this detachment is that of a particular sense, or of consciousness, they are painters or sculptors, musicians or poets. It is therefore a much more direct vision of reality that we find in the different arts; and it is because the artist is less intent on utilizing his perception that he perceives a greater number of things. (The Creative Mind, p. 114)

The Method of Intuition

Bergson argued that the indivisible flow of time and the holistic nature of reality required an intuitive approach, that is, “the sympathy by which one is transported into the interior of an object in order to coincide with what there is unique and consequently inexpressible in it.” (The Creative Mind, p. 135) Analysis, as practiced in the scientific disciplines, breaks down objects into elements, but this method of understanding is a translation, an insight less direct and holistic than intuition. The intuition comes first: one can pass from intuition to analysis, but not from analysis to intuition.

In his essay on the French philosopher Ravaisson, Bergson underscored the benefits and necessity of an intuitive approach:

[Ravaisson] distinguished two different ways of philosophizing. The first proceeds by analysis; it resolves things into their inert elements; from simplification to simplification it passes to what is most abstract and empty. Furthermore, it matters little whether this work of abstraction is effected by a physicist that we may call a mechanist or by a logician who professes to be an idealist: in either case it is materialism. The other method not only takes into account the elements but their order, their mutual agreement and their common direction. It no longer explains the living by the dead, but, seeing life everywhere, it defines the most elementary forms by their aspiration toward a higher form of life. It no longer brings the higher down to the lower, but on the contrary, the lower to the higher. It is, in the real sense of the word, spiritualism. (p. 202)

From Philosophy to Religion

A religious tendency is apparent in Bergson’s philosophical writings, and this tendency grew more pronounced as Bergson grew older. It is likely that Bergson saw religion as a form of perceptual knowledge of the Good, widened by imagination. Bergson’s final major work, The Two Sources of Morality and Religion (Notre Dame, IN: University of Notre Dame Press, 1977) was both a philosophical critique of religion and a religious critique of philosophy, while acknowledging the contributions of both forms of knowledge. Bergson drew a distinction between “static religion,” which he believed originated in social obligations to society, and “dynamic religion,” which he argued originated in mysticism and put humans “in the stream of the creative impetus.” (The Two Sources of Morality and Religion, p. 179)

Bergson was a harsh critic of the superstitions of “static religion,” which he called a “farrago of error and folly.” These superstitions were common in all cultures, and originated in the human imagination, which created myths to explain natural events and human history. However, Bergson noted, static religion did play a role in unifying primitive societies and creating a common culture within which individuals would subordinate their interests to the common good of society. Static religion created and enforced social obligations, without which societies could not endure. Religion also provided comfort against the depressing reality of death. (The Two Sources of Morality and Religion, pp. 102-22)

In addition, it would be a mistake, Bergson argued, to suppose that one could obtain dynamic religion without the foundation of static religion. Even the superstitions of static religion originated in the human perception of a beneficent virtue that became elaborated into myths. To regard a cool running spring or a warm fire on the hearth as the action of spirits or gods was perhaps a case of imagination run rampant, but these were still real goods, as were the other goods provided by the pagan gods.

Dynamic religion originated in static religion, but also moved above and beyond it, with a small number of exceptional human beings who were able to reach the divine source: “In our eyes, the ultimate end of mysticism is the establishment of a contact . . . with the creative effort which life itself manifests. This effort is of God, if it is not God himself. The great mystic is to be conceived as an individual being, capable of transcending the limitations imposed on the species by its material nature, thus continuing and extending the divine action.” (pp. 220-21)

In Bergson’s view, mysticism is intuition turned inward, to the “roots of our being, and thus to the very principle of life in general.” (p. 250) Rational philosophy cannot fully capture the nature of mysticism, because the insights of mysticism cannot be captured in words or symbols, except perhaps in the word “love”:

God is love, and the object of love: herein lies the whole contribution of mysticism. About this twofold love the mystic will never have done talking. His description is interminable, because what he wants to describe is ineffable. But what he does state clearly is that divine love is not a thing of God: it is God Himself. (p. 252)

Even so, just as the dynamic religion bases its advanced moral insights in part on the social obligations of static religion, dynamic religion also must be propagated through the images and symbols supplied by the myths of static religion. (One can see this interplay of static and dynamic religion in Jesus and Gandhi, both of whom were rooted in their traditional religions, but offered original teachings and insights that went beyond their traditions.)

Toward the end of his life, Henri Bergson strongly considered converting to Catholicism (although the Church had already placed three of Bergson’s works on its Index of Prohibited Books). Bergson saw Catholicism as best representing his philosophical inclinations for knowing through perception and intuition, and for joining the vital impetus responsible for creation. However, Bergson was Jewish, and the anti-Semitism of 1930s and 1940s Europe made him reluctant to officially break with the Jewish people. When the Nazis conquered France in 1940 and the Vichy puppet government of France decided to persecute Jews, Bergson registered with the authorities as a Jew and accepted the persecutions of the Vichy regime with stoicism. Bergson died in 1941 at the age of 81.

Once among the most celebrated intellectuals in the world, today Bergson is largely forgotten. Even among French philosophers, Bergson is much less known than Descartes, Sartre, Comte, and Foucault. It is widely believed that Bergson lost his debate with Einstein in 1922 on the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time, p. 6) But it is recognized today even among physicists that while Einstein’s conception of spacetime in relativity theory is an excellent theory for predicting the motion of objects, it does not disprove the existence of time and real change. It is also true that Bergson’s writings are extraordinarily difficult to understand at times. One can go through pages of dense, complex text trying to understand what Bergson is saying, get suddenly hit with a colorful metaphor that seems to explain everything — and then have a dozen more questions about the meaning of the metaphor. Nevertheless, Bergson remains one of the very few philosophers who looked beyond eternal forms to the reality of a dynamic universe, a universe moved by a vital impetus always creating, always changing, never resting.

What is “Transcendence”?

You may have noticed a number of writings on religious topics that make reference to “transcendence” or “the transcendent.” However, the word “transcendence” is usually not very well defined, if it is defined at all. The Catechism of the Catholic Church makes several references to transcendence, but it is not completely clear what transcendence means other than the infinite greatness of God, and the fact that God is “the inexpressible, the incomprehensible, the invisible, the ungraspable.” For those who value reason and precise arguments, this vagueness is unsatisfying. Astonishingly, the fifteen-volume Catholic Encyclopedia (1907-1914) did not even have an entry on “transcendence,” though it did have an entry on “transcendentalism,” a largely secular philosophy with a variety of schools and meanings. (The New Catholic Encyclopedia in 1967 finally did include an entry on “transcendence.”)

The Oxford English Dictionary defines “transcendence” as “the action or fact of transcending, surmounting, or rising above . . . ; excelling, surpassing; also the condition or quality of being transcendent, surpassing eminence or excellence. . . .” The reference to “excellence” is probably key to understanding what “transcendence” is. In my previous essay on ancient Greek religion, I pointed out that areté, the Greek word for “excellence,” was a central idea of Greek culture, and that one cannot fully appreciate ancient Greek pagan religion without recognizing the centrality of this devotion to excellence. The Greeks depicted their gods as human, but with perfect physical forms. And while the behavior of the Greek gods was often dubious from a moral standpoint, the Greek gods were still regarded as the givers of wisdom, order, justice, love, and all the institutions of human civilization.

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Theologians refer to transcendence as one of the two natures of God, the other being “immanence.” Transcendence refers to the higher nature of God and immanence refers to God as He currently works in reality, i.e., the cosmic order. The division between those who believe in a personal God and those who believe in an impersonal God reflects the division between the transcendent and immanent view of God. It is no surprise that most scientists who believe in God tend more to the view of an impersonal God, because their whole life is dedicated to examining the reality of the cosmic order, which seems to operate according to a set of rules rather than personal supervision.

Of course, atheists don’t even believe in an impersonal God. One famous atheist, Sigmund Freud, argued that religion was an illusion, a simple exercise in “wish fulfillment.” According to Freud, human beings desired love, immortality, and an end to suffering and pain, so they gravitated to religion as a solution to the inevitable problems and limitations of mortal life. Marxists have a similar view of religion, seeing promises of an afterlife as a barrier to improving actual human life.

Another view was taken by the American philosopher George Santayana, whose book, Reason in Religion, is one of the very finest books ever written on the subject of religion. According to Santayana, religion was an imaginative and poetic interpretation of life; religion supplied ideal ends to which human beings could orient their lives. Religion failed only when it attributed literal truth to these imaginative ideal ends. Thus religions should be judged, according to Santayana, according to whether they were good or bad, not whether they were true or false.

This criterion for judging religion would appear to be irrational, both to rationalists and to those who cling to faith. People tend to equate worship of God with belief in God, and often see literalists and fundamentalists as the most devoted of all. But I would argue that worship is the act of submission to ideal ends, which hold value precisely because they are higher than actually existing things, and therefore cannot pass traditional tests of truth, which call for correspondence to reality.

In essence, worship is submission to a transcendent Good. We see good in our lives all the time, but we know that the particular goods we experience are partial and perishable. Freud is right that we wish for goods that cannot be acquired completely in our lives and that we use our imaginations to project perfect and eternal goods, i.e. God and heaven. But isn’t it precisely these ideal ends that are sacred, not the flawed, perishable things that we see all around us? In the words of Santayana,

[I]n close association with superstition and fable we find piety and spirituality entering the world. Rational religion has these two phases: piety, or loyalty to necessary conditions, and spirituality, or devotion to ideal ends. These simple sanctities make the core of all the others. Piety drinks at the deep, elemental sources of power and order: it studies nature, honours the past, appropriates and continues its mission. Spirituality uses the strength thus acquired, remodeling all it receives, and looking to the future and the ideal. (Reason in Religion, Chapter XV)

People misunderstand ancient Greek religion when they think it is merely a set of stories about invisible personalities who fly around controlling nature and intervening in human affairs. Many Greek myths were understood to be poetic creations, not history; there were often multiple variations of each myth, and people felt free to modify the stories over time, create new gods and goddesses, and change the functions and responsibilities of each god. Rational consistency was not expected, and depictions of the appearance of any god or goddess in statues or paintings could vary widely. For the Greeks, the gods were not just personalities, but transcendent forms of the Good. This is why Greek religion also worshipped idealized ends and virtues such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks represented these idealized ends and virtues as persons (usually females) in statues, built temples for them, and composed worshipful hymns to them. In fact, the tendency of the Greeks to depict any desired end or virtue as a person was so prevalent that it is sometimes difficult for historians to tell if a particular statue or temple was meant for an actual goddess or god or was a personified symbol. For the ancient Greeks, the distinction may not have been that important, for they tended to think in highly poetic and metaphorical terms.

This may be fine as an interpretation of religion, you may say, but does it make sense to conceive of imaginative transcendent forms as persons or spirits who can actually bring about the goods and virtues that we seek? Is there any reason to think that prayer to Athena will make us wise, that singing a hymn to Zeus will help us win a war, or that a sacrifice at the temples of “Peace” or “Health” will bring us peace or health? If these gods are not powerful persons or spirits that can hear our prayers or observe our sacrifices, but merely poetic representations or symbols, then what good are they and what good is worship?

My view is this: worship and prayer do not affect natural causation. Storms, earthquakes, disease, and all the other calamities that have afflicted humankind from the beginning are not affected by prayer. Addressing these calamities requires research into natural causation, planning, human intervention, and technology. What worship and prayer can do, if they are directed at the proper ends, is help us transcend ourselves, make ourselves better people, and thereby make our societies better.

In a previous essay, I reviewed the works of various physicists, who concluded that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. I think this dynamic view of reality is what we need in order to understand the relationship between the transcendent and the actual. We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are. The pantheism of Spinoza and Einstein is more rational than traditional myths that attributed natural events to a personal God who created the world in six days and subsequently punished evil by causing natural disasters. But pantheism is ultimately a poor basis for religion. What would be the point of worshipping the law of gravity or electromagnetism or the elements in the periodic table? These foundational parts of the universe are impressive, but I would argue that aspiring to something higher is fundamental not only to human nature but to the universe itself. The universe, after all, began simply with a concentrated point of energy; then space expanded and a few elements such as hydrogen and helium formed; only after hundreds of millions of years did the first stars, planets, and the other elements necessary for life begin to emerge.

Worshipping the transcendent orients the self to a higher good, out of the immediate here-and-now. And done properly, worship results in worthy accomplishments that improve life. We tend to think of human civilization as being based on the rational mastery of a body of knowledge. But all knowledge began with an imagined transcendent good. The very first lawgivers had no body of laws to study; the first ethicists had no texts on morals to consult; the first architects had no previous designs to emulate; the first mathematicians had no symbols to calculate with; the first musicians had no composers to study. All our knowledge and civilization began with an imagined transcendent good, which inspired experimentation with primitive forms, and then improvement on those initial efforts. Only much later, after many centuries, did the fields of law, ethics, architecture, mathematics, and music become bodies of knowledge requiring years of study. So we attribute these accomplishments to reason, forgetting the imaginative leaps that first spurred these fields.


Zen and the Art of Science: A Tribute to Robert Pirsig

Author Robert Pirsig, widely acclaimed for his bestselling books, Zen and the Art of Motorcycle Maintenance (1974) and Lila (1991), passed away in his home on April 24, 2017. A well-rounded intellectual equally at home in the sciences and the humanities, Pirsig made the case that scientific inquiry, art, and religious experience were all particular forms of knowledge arising out of a broader form of knowledge about the Good or what Pirsig called “Quality.” Yet, although Pirsig’s books were bestsellers, contemporary debates about science and religion are oddly neglectful of Pirsig’s work. So what did Pirsig claim about the common roots of human knowledge, and how do his arguments provide a basis for reconciling science and religion?

Pirsig gradually developed his philosophy as a response to a crisis in the foundations of scientific knowledge, a crisis he first encountered while pursuing studies in biochemistry. The popular consensus at the time was that scientific methods promised objectivity and certainty in human knowledge. One developed hypotheses, conducted observations and experiments, and came to a conclusion based on objective data. That was how scientific knowledge accumulated.

However, Pirsig noted that, contrary to his own expectations, the number of hypotheses could easily grow faster than experiments could test them. One could not just come up with hypotheses – one had to make good hypotheses, ones that could eliminate the need for endless and unnecessary observations and testing. Good hypotheses required mental inspiration and intuition, components that were mysterious and unpredictable.  The greatest scientists were precisely like the greatest artists, capable of making immense creative leaps before the process of testing even began.  Without those creative leaps, science would remain on a never-ending treadmill of hypothesis development – this was the “infinity of hypotheses” problem.  And yet, the notion that science depended on intuition and artistic leaps ran counter to the established view that the scientific method required nothing more than reason and the observation and recording of an objective reality.

Consider Einstein. One of history’s greatest scientists, Einstein hardly ever conducted actual experiments. Rather, he frequently engaged in “thought experiments,” imagining what it would be like to chase a beam of light, what it would feel like to be in a falling elevator, and what a clock would look like if the streetcar he was riding raced away from the clock at the speed of light.

One of the most fruitful sources of hypotheses in science is mathematics, a discipline that consists of the creation of symbolic models of quantitative relationships. And yet the nature of mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincaré believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincaré believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. The Indian mathematical genius Srinivasa Ramanujan was a Hindu mystic who believed that solutions were revealed to him in dreams by the goddess Namagiri.

Intuition and inspiration were human solutions to the infinity-of-hypotheses problem. But Pirsig noted there was a related problem that had to be solved — the infinity of facts. Science depended on observation, but the issue of which facts to observe was neither obvious nor purely objective. Scientists had to make value judgments as to which facts were worth close observation and which facts could be safely overlooked, at least for the moment. This process often depended heavily on an imprecise sense or feeling, and sometimes mere accident brought certain facts to scientists’ attention. What values guided the search for facts? Pirsig cited Poincaré’s work The Foundations of Science. According to Poincaré, general facts were more important than particular facts, because one could explain more by focusing on the general than on the specific. Desire for simplicity was next: by beginning with simple facts, one could begin accumulating knowledge about nature without getting bogged down in complexity at the outset. Finally, facts that yielded new and interesting findings were more important than those that were trivial. The point was not to gather as many facts as possible but to condense as much experience as possible into a small volume of interesting findings.

Research on the human brain supports the idea that the ability to value is essential to the discernment of facts.  Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competency after damage took place to the emotional center of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making in the face of infinite facts became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51) Damasio discusses several other similar case studies.

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion, and therefore, hypothetically capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools for effective decision-making in everyday life are needed for the scientific enterprise as well. A value-free scientist would not only be unable to sustain the social interaction that science requires but would also be unable to develop a research plan, manage his or her time, or stick to that plan.

_________

Where Pirsig’s philosophy becomes particularly controversial and difficult to understand is in his approach to the truth. The dominant view of truth today is known as the “correspondence” theory of truth – that is, any human statement that is true must correspond precisely to something objectively real. In this view, the laws of physics and chemistry are real because they correspond to actual events that can be observed and demonstrated. Pirsig argues on the contrary that in order to understand reality, human beings must invent symbolic and conceptual models, that there is a large creative component to these models (it is not just a matter of pure correspondence to reality), and that multiple such models can explain the same reality even if they are based on wholly different principles. Math, logic, and even the laws of physics are not “out there” waiting to be discovered – they exist in the mind, though that does not make them bad, wrong, or unreal.

There are several reasons why our symbolic and conceptual models don’t correspond literally to reality, according to Pirsig. First, there is always going to be a gap between reality and the concepts we use to describe reality, because reality is continuous and flowing, while concepts are discrete and static. The creation of concepts necessarily calls for cutting reality into pieces, but there is no one right way to divide reality, and something is always lost when this is done. In fact, Pirsig noted, our very notions of subjectivity and objectivity, the former allegedly representing personal whims and the latter representing truth, rested upon an artificial division of reality into subjects and objects; in fact, there were other ways of dividing reality that could be just as legitimate or useful. In addition, concepts are necessarily static – they can’t be always changing or we would not be able to make sense of them. Reality, however, is always changing. Finally, describing reality is not always a matter of using direct and literal language but may require analogy and imaginative figures of speech.

Because of these difficulties in expressing reality directly, a variety of symbolic and conceptual models, based on widely varying principles, are not only possible but necessary – necessary for science as well as other forms of knowledge. Pirsig points to the example of the crisis that occurred in mathematics in the nineteenth century. For many centuries, it was widely believed that geometry, as developed by the ancient Greek mathematician Euclid, was the most exact of all of the sciences.  Based on a small number of axioms from which one could deduce multiple propositions, Euclidean geometry represented a nearly perfect system of logic.  However, while most of Euclid’s axioms were seemingly indisputable, mathematicians had long experienced great difficulty in satisfactorily demonstrating the truth of one of the chief axioms on which Euclidean geometry was based (the so-called parallel postulate). This slight uncertainty led to an even greater crisis of uncertainty when mathematicians discovered that they could reverse or negate this axiom and create alternative systems of geometry that were every bit as logical and valid as Euclidean geometry.  The science of geometry was gradually replaced by the study of multiple geometries. Pirsig cited Poincaré, who pointed out that the principles of geometry were not eternal truths but definitions and that the test of a system of geometry was not whether it was true but how useful it was.
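The geometry example can be made concrete. On the surface of a sphere, the “straightest” possible paths are great circles, and a triangle built from them obeys different rules than a flat one, yet the geometry is perfectly consistent. A minimal sketch, using the spherical law of cosines:

```python
import math

def spherical_angle(a, b, c):
    """Angle opposite side a of a spherical triangle, all sides in radians,
    computed from the spherical law of cosines for sides."""
    return math.acos(
        (math.cos(a) - math.cos(b) * math.cos(c))
        / (math.sin(b) * math.sin(c))
    )

# Triangle on a unit sphere: the north pole plus two points on the equator
# 90 degrees apart.  Every side is a quarter of a great circle (pi/2 radians).
side = math.pi / 2
angles = [spherical_angle(side, side, side)] * 3
total = math.degrees(sum(angles))
print(round(total))  # 270, not the Euclidean 180
```

The triangle’s angles sum to 270 degrees rather than 180, a result that is impossible in Euclid’s system yet perfectly valid in spherical geometry: two internally consistent systems, built from different axioms.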

So how do we judge the usefulness or goodness of our symbolic and conceptual models? Traditionally, we have been told that pure objectivity is the only solution to the chaos of relativism, in which nothing is absolutely true. But Pirsig pointed out that this hasn’t really been how science has worked. Rather, models are constructed according to the often competing values of simplicity and generalizability, as well as accuracy. Theories aren’t just about matching concepts to facts; scientists are guided by a sense of the Good (Quality) to encapsulate as much of the most important knowledge as possible into a small package. But because there is no one right way to do this, rather than converging to one true symbolic and conceptual model, science has instead developed a multiplicity of models. This has not been a problem for science, because if a particular model is useful for addressing a particular problem, that is considered good enough.

The crisis in the foundations of mathematics created by the discovery of non-Euclidean geometries and other factors (such as the paradoxes inherent in set theory) has never really been resolved. Mathematics is no longer the source of absolute and certain truth, and in fact, it never really was. That doesn’t mean that mathematics isn’t useful – it certainly is enormously useful and helps us make true statements about the world. It’s just that there’s no single perfect and true system of mathematics. (On the crisis in the foundations of mathematics, see the papers here and here.) Mathematical axioms, once believed to be certain truths and the foundation of all proofs, are now considered definitions, assumptions, or hypotheses. And a substantial number of mathematicians now declare outright that mathematical objects are imaginary, that particular mathematical formulas may be used to model real events and relationships, but that mathematics itself has no existence outside the human mind. (See The Mathematical Experience by Philip J. Davis and Reuben Hersh.)

Even some basic rules of logic accepted for thousands of years have come under challenge in the past hundred years, not because they are absolutely wrong, but because they are inadequate in many cases, and a different set of rules is needed. The Law of the Excluded Middle states that any proposition must be either true or false (“P” or “not P” in symbolic logic). But ever since mathematicians discovered propositions which are possibly true but not provable, a third category of “possible/unknown” has been added. Other systems of logic have been invented that use the idea of multiple degrees of truth, or even an infinite continuum of truth, from absolutely false to absolutely true.
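One such alternative system can be sketched in a few lines. The following illustration uses Kleene’s strong three-valued logic, with Python’s None standing in for the “unknown” truth value, to show how the Law of the Excluded Middle can fail without the logic collapsing:

```python
# Kleene's strong three-valued logic: True, False, and None ("unknown").
def k_not(p):
    return None if p is None else not p

def k_or(p, q):
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

# The Law of the Excluded Middle says "P or not P" is always true.
for p in (True, False, None):
    print(p, "->", k_or(p, k_not(p)))
# When p is unknown, "P or not P" is also unknown: the law is no longer
# a tautology, yet the system remains perfectly usable.
```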

The notion that we need multiple symbolic and conceptual models to understand reality remains controversial to many. It smacks of relativism, they argue, in which every person’s opinion is as valid as another person’s. But historically, the use of multiple perspectives hasn’t resulted in the abandonment of intellectual standards among mathematicians and scientists. One still needs many years of education and an advanced degree to obtain a job as a mathematician or scientist, and there is a clear hierarchy among practitioners, with the very best mathematicians and scientists working at the most prestigious universities and winning the highest awards. That is because there are still standards for what is good mathematics and science, and scholars are rewarded for solving problems and advancing knowledge. The fact that no one has agreed on what is the One True system of mathematics or logic isn’t relevant. In fact, physicist Stephen Hawking has argued:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and to the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient (The Grand Design, p. 7).

Among the most controversial and mind-bending claims Pirsig makes is that the very laws of nature themselves exist only in the human mind. “Laws of nature are human inventions, like ghosts,” he writes. Pirsig even remarks that it makes no sense to think of the law of gravity existing before the universe, that it only came into existence when Isaac Newton thought of it. It’s an outrageous claim, but if one looks closely at what the laws of nature actually are, it’s not so crazy an argument as it first appears.

For all of the advances that science has made over the centuries, there remains a sharp division of views among philosophers and scientists on one very important issue: are the laws of nature actual causal powers responsible for the origins and continuance of the universe or are the laws of nature summary descriptions of causal patterns in nature? The distinction is an important one. In the former view, the laws of physics are pre-existing or eternal and possess god-like powers to create and shape the universe; in the latter view, the laws have no independent existence – we are simply finding causal patterns and regularities in nature that allow us to predict and we call these patterns “laws.”

One powerful argument in favor of the latter view is that most of the so-called “laws of nature,” contrary to the popular view, actually have exceptions – and sometimes the exceptions are large. That is because the laws are simplified models of real phenomena. The laws were cobbled together by scientists in order to strike a careful balance between the values of scope, predictive accuracy, and simplicity. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has noted that as a result of this balance of values, physical laws are actually approximations that apply only within a certain range. This point has also been made more recently by Ronald Giere, a professor of philosophy at the University of Minnesota, in Science Without Laws and Nancy Cartwright of the University of California at San Diego in How the Laws of Physics Lie.

Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Newton’s laws of motion also have exceptions, depending on the force, distance, and speed. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a single planet orbiting the sun, ignoring the gravitational pull of the other planets. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe that Mendel’s laws should not even be considered laws.
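The ideal gas law makes the point numerically. The sketch below compares it with the van der Waals equation for one mole of carbon dioxide; the constants are approximate textbook values, used purely for illustration:

```python
# Ideal gas law (PV = nRT) vs. the van der Waals equation for 1 mol of CO2.
# Approximate constants: a = 3.64 L^2*bar/mol^2, b = 0.0427 L/mol,
# R = 0.08314 L*bar/(mol*K); temperature fixed at 300 K.
R, a, b, T = 0.08314, 3.64, 0.0427, 300.0

def p_ideal(v):   # pressure in bar from the ideal gas law
    return R * T / v

def p_vdw(v):     # pressure in bar from the van der Waals equation
    return R * T / (v - b) - a / v ** 2

for v in (25.0, 0.2):   # molar volume in L/mol
    print(v, round(p_ideal(v), 2), round(p_vdw(v), 2))
# At large volume (roughly 1 bar) the two agree to within half a percent;
# at 0.2 L/mol the ideal gas "law" overstates the pressure by about 85%.
```

At everyday pressures the “law” is excellent; compress the gas enough and it fails badly, exactly the pattern of a simplified model rather than an exceptionless commandment.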

So if we think of laws of nature as being pre-existing, eternal commandments, with god-like powers to shape the universe, how do we account for these exceptions to the laws? The standard response by scientists is that their laws are simplified depictions of the real laws. But if that is the case, why not state the “real” laws? Because by the time we wrote down the real laws, accounting for every possible exception, we would have an extremely lengthy and detailed description of causation that would not recognizably be a law. The whole point of the laws of nature was to develop tools by which one could predict a large number of phenomena (scope), maintain a good-enough correspondence to reality (accuracy), and make it possible to calculate predictions without spending an inordinate amount of time and effort (simplicity). That is why although Einstein’s conception of gravity and his “field equations” have supplanted Newton’s law of gravitation, physicists still use Newton’s “law” in most cases because it is simpler and easier to use; they only resort to Einstein’s complex equations when they have to! The laws of nature are human tools for understanding, not mathematical gods that shape the universe. The actual practice of science confirms Pirsig’s point that the symbolic and conceptual models that we create to understand reality have to be judged by how good they are – simple correspondence to reality is insufficient and in many cases is not even possible anyway.

_____________


Ultimately, Pirsig concluded, the scientific enterprise is not that different from the pursuit of other forms of knowledge – it is based on a search for the Good. Occasionally, you see this acknowledged explicitly, when mathematicians discuss the beauty of certain mathematical proofs or results, as defined by their originality, simplicity, ability to solve many problems at once, or their surprising nature. Scientists also sometimes write about the importance of elegance in their theories, defined as the ability to explain as much as possible, as clearly as possible, and as simply as possible. Depending on the field of study, the standards of judgment, the tools, and the scope of inquiry may differ. But all forms of human knowledge — art, rhetoric, science, reason, and religion — originate in, and are dependent upon, a response to the Good or Quality. The difference between science and religion is that scientific models are more narrowly restricted to understanding how to predict and manipulate natural phenomena, whereas religious models address larger questions of meaning and value.

Pirsig did not ignore or suppress the failures of religious knowledge with regard to factual claims about nature and history. The traditional myths of creation and the stories of various prophets were contrary to what we know now about physics, biology, paleontology, and history. In addition, Pirsig was by no means a conventional theist — he apparently did not believe that God was a personal being who possessed the attributes of omniscience and omnipotence, controlling or potentially controlling everything in the universe.

However, Pirsig did believe that God was synonymous with the Good, or “Quality,” and was the source of all things.  In fact, Pirsig wrote that his concept of Quality was similar to the “Tao” (the “Way” or the “Path”) in the Chinese religion of Taoism. As such, Quality was the source of being and the center of existence. It was also an active, dynamic power, capable of bringing about higher and higher levels of being. The evolution of the universe, from simple physical forms, to complex chemical compounds, to biological organisms, to societies was Dynamic Quality in action. The most recent stage of evolution – Intellectual Quality – refers to the symbolic models that human beings create to understand the universe. They exist in the mind, but are a part of reality all the same – they represent a continuation of the growth of Quality.

What many religions were missing, in Pirsig’s view, was not objectivity, but dynamism: an ability to correct old errors and achieve new insights. The advantage of science was its willingness and ability to change. According to Pirsig,

If scientists had simply said Copernicus was right and Ptolemy was wrong without any willingness to further investigate the subject, then science would have simply become another minor religious creed. But scientific truth has always contained an overwhelming difference from theological truth: it is provisional. Science always contains an eraser, a mechanism whereby new Dynamic insight could wipe out old static patterns without destroying science itself. Thus science, unlike orthodox theology, has been capable of continuous, evolutionary growth. (Lila, p. 222)

The notion that religion and orthodoxy go together is widespread among believers and secularists. But there is no necessary connection between the two. All religions originate in social processes of story-telling, dialogue, and selective borrowing from other cultures. In fact, many religions begin as dangerous heresies before they become firmly established — orthodoxies come later. The problem with most contemporary understandings of religion is that one’s adherence to religion is often measured by one’s commitment to orthodoxy and membership in religious institutions rather than an honest quest for what is really good.  A person who insists on the literal truth of the Bible and goes to church more than once a week is perceived as being highly religious, whereas a person not connected with a church but who nevertheless seeks religious knowledge wherever he or she can find it is considered less committed or even secular.  This prejudice has led many young people to identify as “spiritual, not religious,” but religious knowledge is not inherently about unwavering loyalty to an institution or a text. Pirsig believed that mysticism was a necessary component of religious knowledge and a means of disrupting orthodoxies and recovering the dynamic aspect of religious insight.

There is no denying that the most prominent disputes between science and religion in the last several centuries regarding the physical workings of the universe have resulted in a clear triumph for scientific knowledge over religious knowledge.  But the solution to false religious beliefs is not to discard religious knowledge — religious knowledge still offers profound insights beyond the scope of science. That is why it is necessary to recover the dynamic nature of religious knowledge through mysticism, correction of old beliefs, and reform. As Pirsig argued, “Good is a noun.” Not because Good is a thing or an object, but because Good  is the center and foundation of all reality and all forms of knowledge, whether we are consciously aware of it or not.

The Use of Fiction and Falsehood in Science

Astrophysicist Neil deGrasse Tyson has some interesting and provocative things to say about religion in a recent interview. I tend to agree with Tyson that religions have a number of odd or even absurd beliefs that are contrary to science and reason. One statement by Tyson, however, struck me as inaccurate. According to Tyson, “[T]here are religions and belief systems, and objective truths. And if we’re going to govern a country, we need to base that governance on objective truths — not your personal belief system.” (The Daily Beast)

I have a great deal of respect for Tyson as a scientist, and Tyson clearly knows more about physics than I do. But I think his understanding of what scientific knowledge provides is naïve and unsupported by history and present day practice. The fact of the matter is that scientists also have belief systems, “mental models” of how the world works. These mental models are often excellent at making predictions, and may also be good for explanation. But the mental models of science may not be “objectively true” in representing reality.

The best mental models in science satisfy several criteria: they reliably predict natural phenomena; they cover a wide range of such phenomena (i.e., they cover much more than a handful of special cases); and they are relatively simple. Now it is not easy to create a mental model that satisfies these criteria, especially because there are tradeoffs between the different criteria. As a result, even the best scientists struggle for many years to create adequate models. But as descriptions of reality, the models, or components of the models, may be fictional or even false. Moreover, although we think that the models we have today are true, every good scientist knows that in the future our current models may be completely overturned by new models based on entirely new conceptions. Yet in many cases, scientists respect or retain the older models because they are useful, even if the models are known not to match reality!

Consider the differences between Isaac Newton’s conception of gravity and Albert Einstein’s conception of gravity. According to Newton, gravity is a force that attracts objects to each other. If you throw a ball on earth, the path of the ball eventually curves downward because of the gravitational attraction of the earth. In Newton’s view, planets orbit the sun because the force of gravity pulls planetary bodies away from the straight line paths that they would normally follow as a result of inertia: hence, planets follow curved, elliptical orbits. But according to Einstein, gravity is not a force — gravity seems like it’s a force, but it’s actually a “fictitious force.” In Einstein’s view, objects seem to attract each other because mass warps or curves spacetime, and objects tend to follow the paths made by curved spacetime. Newton and Einstein agree that inertia causes objects in motion to continue in straight lines unless they are acted on by a force; but in Einstein’s view, planets orbit the sun because they are actually already travelling straight paths, only in curved spacetime! (Yes, this makes sense — if you travel in a jet, your straightest possible path between two cities is actually curved, because the earth is round.)

Scientists agree that Einstein’s view of gravity is correct (for now). But they also continue to use Newtonian models all the time. Why? Because Newtonian models are much simpler than Einstein’s and scientists don’t want to work harder than they have to! Using Newtonian conceptions of gravity as a real force, scientists can still track the paths of objects and send satellites into orbit; Newton’s equations work perfectly fine as predictive models in most cases. It is only in extraordinary cases of very high gravity or very high speeds that scientists must abandon Newtonian models and use Einstein’s to get more accurate predictions. Otherwise scientists much prefer to assume gravity is a real force and use Newtonian models. Other fictitious forces that scientists calculate using Newton’s models are the Coriolis force and centrifugal force.
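A back-of-the-envelope calculation shows why Newton usually suffices. The size of general-relativistic corrections scales roughly with the dimensionless ratio GM/(rc²); the sketch below, with rounded physical constants and a hypothetical neutron star for the extreme case, compares everyday conditions to extreme ones:

```python
# Rough sketch: general-relativistic corrections to Newtonian gravity
# scale with the dimensionless quantity G*M / (r * c^2).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def gr_correction(mass_kg, radius_m):
    return G * mass_kg / (radius_m * c ** 2)

# Earth's surface vs. the surface of a hypothetical neutron star
# (~1.4 solar masses packed into a 12 km radius).
earth = gr_correction(5.972e24, 6.371e6)
neutron_star = gr_correction(2.8e30, 1.2e4)

print(f"{earth:.1e}")         # ~7e-10: Newton's model is ample here
print(f"{neutron_star:.2f}")  # ~0.17: Einstein's equations are required
```

Near Earth the correction is smaller than one part in a billion, which is why engineers launch satellites with Newton; near a neutron star it reaches tens of percent, and only Einstein’s model gives usable predictions.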

Even in cases where you might expect scientists to use Einstein’s conception of curved spacetime, there is not a consistent practice. Sometimes scientists assume that spacetime is curved, sometimes they assume spacetime is flat. According to theoretical physicist Kip Thorne, “It is extremely useful, in relativity research, to have both paradigms at one’s fingertips. Some problems are solved most easily and quickly using the curved spacetime paradigm; others, using flat spacetime. Black hole problems . . . are most amenable to curved spacetime techniques; gravitational-wave problems . . . are most amenable to flat spacetime techniques.” (Black Holes and Time Warps). Whatever method provides the best results is what matters, not so much whether spacetime is really curved or not.

The question of the reality of mental models in science is particularly acute with regard to mathematical models. For many years, mathematicians have been debating whether or not the objects of mathematics are real, and they have yet to arrive at a consensus. So, if an equation accurately predicts how natural phenomena behave, is it because the equation exists “out there” someplace? Or is it because the equation is just a really good mental model? Einstein himself argued that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” By this, Einstein meant that it was possible to create perfectly certain mathematical models in the human mind; but that matching these models’ predictions to natural phenomena required repeated observation and testing, and one could never be completely sure that one’s model was the final answer, much less that it objectively existed.

And even if mathematical models work perfectly in predicting the behavior of natural phenomena, there remains the question of whether the different components of the model really match to something in reality. As noted above, Newton’s model of gravity does a pretty good job of predicting motion — but the part of the model that describes gravity as a force is simply wrong. In mathematics, the set of numbers known as “imaginary numbers” are used by engineers for calculating electric current; they are used by 3D modelers; and they are used by physicists in quantum mechanics, among other applications. But that doesn’t necessarily mean that imaginary numbers exist or correspond to some real quantity — they are just useful components of an equation.
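A small illustration of how “imaginary” numbers earn their keep: alternating-current analysis treats impedance as a complex number, and Ohm’s law carries over unchanged. The component values below are invented for the example:

```python
import cmath
import math

# Series resistor-inductor circuit driven at 60 Hz (hypothetical values).
R = 100.0                  # resistance in ohms
L = 0.05                   # inductance in henries
omega = 2 * math.pi * 60.0 # angular frequency in rad/s

Z = complex(R, omega * L)  # impedance Z = R + j*omega*L, a complex number
V = 120.0                  # RMS source voltage
I = V / Z                  # Ohm's law still applies with complex impedance

print(round(abs(I), 3))                        # current magnitude, ~1.18 A
print(round(math.degrees(cmath.phase(I)), 1))  # current lags by ~10.7 degrees
```

The imaginary part of Z never corresponds to a measurable “imaginary current”; it is bookkeeping that makes the phase relationships come out right, which is exactly the point: a useful component of a model need not match anything in reality.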

A great many scientists are quite upfront about the fact that their models may not be an accurate reflection of reality. In their view, the purpose of science is to predict the behavior of natural phenomena, and as long as science gets better and better at this, it is less important if models are proved to be a mismatch to reality. Brian Koberlein, an astrophysicist at the Rochester Institute of Technology, writes that scientific theories should be judged by the quality and quantity of their predictions, and that theories capable of making predictions can’t be proved wrong, only replaced by theories that are better at predicting. For example, he notes that the caloric theory of heat, which posited the existence of an invisible fluid within materials, was quite successful in predicting the behavior of heat in objects, and in many respects still is. Today, we don’t believe such a fluid exists, but we didn’t discard the theory until we came up with a new theory that could predict better. The caloric theory of heat wasn’t “proven wrong,” just replaced with something better. Koberlein also points to Newton’s conception of gravity, which is still used today because it is simpler than Einstein’s and “good enough” at predicting in most cases. Koberlein concludes that for these reasons, Einstein will “never” be wrong — we just may find a theory better at predicting.

Stephen Hawking has discussed the problem of truly knowing reality, and notes that it is perfectly possible to have different theories with entirely different conceptual frameworks that work equally well at predicting the same phenomena. In a fanciful example, Hawking notes that goldfish living in a curved bowl will see straight-line movement outside the bowl as being curved, but despite this it would still be possible for goldfish to develop good predictive theories. He notes that likewise, human beings may also have a distorted picture of reality, but we are still capable of building good predictive models. Hawking calls his philosophy “model-dependent realism”:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If there are two models that both agree with observation, like the goldfish’s model and ours, then one cannot say that one is more real than the other. One can use whichever model is more convenient in the situation under consideration. (The Grand Design, p. 46)

So if science consists of belief systems/mental models, which may contain fictions or falsehoods, how exactly does science differ from religion?

Well for one thing, science far excels religion in providing good predictive models. If you want to know how the universe began, how life evolved on earth, how to launch a satellite into orbit, or how to build a computer, religious texts offer virtually nothing that can help you with these tasks. Neil deGrasse Tyson is absolutely correct about the failure of religion in this respect.  Traditional stories of the earth’s creation, as found in the Bible’s book of Genesis, were useful first attempts to understand our origins, but they have been long eclipsed by contemporary scientific models, and there is no use denying this.

What religion does offer, and science does not, is a transcendent picture of how we ought to live our lives and an interpretation of life’s meaning according to this transcendent picture. The behavior of natural phenomena can be predicted to some extent by science, but human beings are free-willed. We can decide to love others or love ourselves above others. We can seek peace, or murder in the pursuit of power and profit. Whatever we decide to do, science can assist us in our actions, but it can’t provide guidance on what we ought to do. Religion provides that vision, and if these visions are imaginative, so are many aspects of scientific models. Einstein himself, while insisting that science was the pursuit of objective knowledge, also saw a role for religion in providing a transcendent vision:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man.

Now fundamentalists and atheists might both agree that rejecting the truth of sacred scripture with regard to the big bang and evolution tends to undermine the transcendent visions of religion. But the fact of the matter is that scientists never reject a mental model simply because parts of the model may be fictional or false; if the model provides useful guidance, it is still a valid part of human knowledge.

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts take on new meanings, and scientific consensus gives way to conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving the older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progresses cumulatively in the sense that it becomes better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress brings us closer and closer to the Truth was, in Kuhn’s view, highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality are inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles and predicts more phenomena to a greater degree of accuracy, then it must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists,” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)
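The Newton-versus-Einstein tradeoff can be made vivid with a back-of-the-envelope calculation (a sketch in Python; the constants are standard published values, and the dimensionless ratio GM/rc² is a conventional order-of-magnitude indicator of how far Einstein’s predictions deviate from Newton’s for a body orbiting at distance r):

```python
# Rough sketch: how big is the relativistic correction for Earth's orbit?
# The dimensionless ratio GM / (r * c^2) indicates the scale of the
# deviation between Einstein's and Newton's predictions.

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30          # mass of the sun, kg
C = 2.998e8               # speed of light, m/s
R_EARTH_ORBIT = 1.496e11  # mean Earth-sun distance, m

correction = G * M_SUN / (R_EARTH_ORBIT * C ** 2)
print(f"Relativistic correction ~ {correction:.1e}")  # on the order of 1e-8
```

For Earth’s orbit the correction is roughly one part in a hundred million, which is why Newton’s far simpler equations are usually good enough.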

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model; sometimes focusing on causation actually gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable, even if those variables are merely associated with certain outcomes and may not cause them. To choose a very simple example, a model that uses a rooster’s crowing to predict the rising of the sun can be a very good predictive model while getting the causation entirely wrong. And there are many examples of this in contemporary scientific practice. Scientists working for Netflix on improving the prediction of customers’ movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don’t have a true causal model. (See Galit Shmueli, “To Explain or to Predict?” in Statistical Science, 2010, vol. 25, no. 3.)
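The rooster example can be made concrete with a toy sketch (the numbers below are invented for illustration, not real observations). The model learns only an association, the typical lag between crow and sunrise, and predicts well without making any claim about causes:

```python
# Toy illustration with invented data: predicting sunrise from a rooster's
# crowing. The model is purely associational -- the crow does not cause
# the sunrise -- yet it predicts accurately.

# Pairs of (crow time, observed sunrise), in minutes past midnight
observations = [(330, 360), (332, 363), (328, 358), (331, 361)]

# Learn the average lag between the crow and the sunrise
lag = sum(sun - crow for crow, sun in observations) / len(observations)

def predict_sunrise(crow_time):
    """Predict sunrise time from crow time using the learned lag."""
    return crow_time + lag

print(predict_sunrise(329))  # 359.25
```

The prediction is useful precisely because the association is stable, even though the underlying cause (the earth’s rotation) appears nowhere in the model.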

Not only is there no single, correct way to make these value tradeoffs; one can often end up with multiple, incompatible theories that deal with the same phenomena, with no obvious choice as to which theory is best. As Kuhn pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!
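The square-root point is easy to verify mechanically: more than one answer passes the test, but not every answer does.

```python
# Both 1000 and -1000 satisfy the defining equation x * x == 1,000,000,
# while an arbitrary answer such as 999 fails it: multiple answers can
# be valid without all answers being valid.
valid = [x for x in (1000, -1000, 999) if x * x == 1_000_000]
print(valid)  # [1000, -1000]
```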

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail that it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, or that there are no standards. Making good maps requires knowledge and great skill.
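The map analogy can even be quantified for one familiar case. On a Mercator projection (chosen here purely as an illustration), the east-west scale is inflated by a factor of 1/cos(latitude), which is why Greenland looks so enormous:

```python
import math

def mercator_inflation(lat_degrees):
    """East-west scale inflation on a Mercator map: 1 / cos(latitude)."""
    return 1.0 / math.cos(math.radians(lat_degrees))

print(mercator_inflation(0))   # 1.0: the equator is drawn true to scale
print(mercator_inflation(70))  # ~2.9: high latitudes are stretched almost threefold
```

The distortion is not a defect but the price of a tradeoff: Mercator sacrifices accurate areas in exchange for the property that made it invaluable to navigators, namely that compass bearings appear as straight lines.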

As I noted above, physicists tend to prefer Newton’s theory of gravity over Einstein’s for predicting the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime; space and time are relative, not absolute; spacetime is curved in the presence of mass; and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers only an incremental improvement over Newton’s, but Einstein’s picture of gravity is so radically different that Kuhn was right in seeing Einstein’s theory as a revolution. But scientists continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient.  (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

Objectivity is Not Scientific

It is a common perception that objectivity is a virtue in the pursuit of knowledge, that we need to know things as they really are, independent of our mental conceptions and interpretations.  It is also a common perception that science is the form of knowledge that is the most objective, and that is why scientific knowledge makes the most progress.

Yet the principle of objectivity immediately runs into problems in the most famous scientific theory, Einstein’s theory of relativity.  According to relativity theory, there is no objective way to measure objects in space and time — these measures are always relative to the observer, depending on the velocities at which the objects and observers are travelling, and observers often end up with different measures for the same object as a result.  For example, an object travelling at very high speed will appear shorter along its direction of motion to an outside observer, a phenomenon known as length contraction.  In addition, time will move more slowly for an observer travelling at high speed than for an observer travelling at low speed.  This phenomenon is illustrated by the “twin paradox” — given a pair of twins, if one sets off in a high-speed rocket while the other stays on earth, the twin on the rocket will have aged less than the twin on earth.  Finally, the sequence of two spatially-separated events, say Event A and Event B, can differ according to the position and velocity of the observer.  Some observers may see Event A occurring before Event B, others may see Event B occurring before Event A, and others will see the two events as simultaneous.  There is no objectively true sequence of events.
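Both effects follow from a single quantity, the Lorentz factor γ = 1/√(1 − v²/c²). A minimal sketch in Python (the speed, ship length, and trip duration are illustrative numbers, not data):

```python
import math

def gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_frac ** 2)

v = 0.8                # travelling at 80% of light speed
g = gamma(v)           # 5/3, about 1.67
ship_length = 100 / g  # a 100 m ship measures 60 m to an outside observer
rocket_years = 10 / g  # 10 earth years pass as only 6 years on the rocket
print(g, ship_length, rocket_years)
```

The same object is simultaneously 100 m long for one observer and 60 m long for another, and neither measurement is “the” objective one.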

The theory of relativity does not say that everything is relative.  The speed of light, for example, is the same for all observers, whether they are moving at a fast speed toward a beam of light or away from a beam of light.  In fact, it was the absolute nature of light speed for all moving observers that led Einstein to conclude that time itself must be different for different observers.  In addition, for any two events that are causally-connected, the events must take place in the same sequence for all observers.  In other words, if Event A causes Event B, Event A must precede Event B for all observers.  So relativity theory sees some phenomena as different for different observers and others as the same for different observers.
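The invariance of light speed shows up in the relativistic velocity-addition rule, (u + v)/(1 + uv/c²), sketched here with c set to 1 for convenience:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition: (u + v) / (1 + u * v / c**2)."""
    return (u + v) / (1 + u * v / c ** 2)

# An observer rushing toward a light beam at 0.9c still measures it at c:
print(add_velocities(1.0, 0.9))  # 1.0, i.e., exactly c
# And two sub-light speeds never combine to exceed c:
print(add_velocities(0.9, 0.9))  # about 0.994, not 1.8
```

Plugging the speed of light into the formula always returns the speed of light, no matter how fast the observer moves: the absolute and the relative coexist in the same equation.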

Finally, the meaning of relativity in science is not that one person’s opinion is just as valid as anyone else’s.  Observers within the same frame of reference (say, multiple observers travelling together in the same vehicle) should agree on measurements of length and time for an outside object even if observers from other reference frames have different results.  If observers within the same vehicle don’t agree, then something is wrong — perhaps someone is misperceiving, or misinterpreting, or something else is wrong.

Nevertheless, if one accepts the theory of relativity (and this theory has been accepted by scientists for many decades now), one has to accept the fact that there is no objective measure of objects in space and time; such measures are entirely observer-dependent.  So why do many cling to the notion of objectivity as a principle of knowledge?

Historically, the goal of objectivity was proposed as a way to solve the problem of subjective error.  Individual subjects have imperfect perceptions and interpretations.  What they see and claim is fallible.  The principle of objectivity tries to overcome this problem by proposing that we need to evaluate objects as they are in themselves, in the absence of the human mind.  The problem with this principle is that we can’t really step outside of our bodies and minds to evaluate an object.

So how do we overcome the problem of subjective error?  The solution is not to abandon mind but to supplement it: communicating with other minds, checking for individual error by seeing whether others are getting different results, engaging in dialogue, and attempting to come to a consensus.  Observations and experiments are repeated many times by many different people before conclusions are established.  In this view, knowledge advances by using the combined power of thousands and thousands of minds, past and present.  It is the only way to correct errors in the relationship between subject and object and to make that relationship better over time.
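The error-reducing power of pooling many fallible observations can be simulated (a toy sketch; the “true length” and the size of individual error are invented for illustration):

```python
import random

random.seed(42)     # fixed seed so the sketch is reproducible
TRUE_LENGTH = 10.0  # the value the observers are trying to measure

def measure():
    """One fallible observation: the true value plus individual error."""
    return TRUE_LENGTH + random.gauss(0, 0.5)

single = measure()                                          # one observer, one look
consensus = sum(measure() for _ in range(10_000)) / 10_000  # many observers pooled
print(abs(single - TRUE_LENGTH), abs(consensus - TRUE_LENGTH))
```

No single observation is trusted on its own; the pooled estimate is typically far closer to the true value than any individual one, which is exactly why replication and consensus are built into scientific practice.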

In the end, all knowledge, including scientific knowledge, is essentially and unalterably about the relationship between subjects and objects — you cannot find true knowledge by splitting objects from subjects any more than you can split H2O into its individual atoms of hydrogen and oxygen and expect to find water in the component parts.