The Dynamic Quality of Henri Bergson

Robert Pirsig writes in Lila that Quality contains a dynamic good in addition to a static good. This dynamic good consists of a search for “betterness” that is unplanned and has no specific destination, but is nevertheless responsible for all progress. Once a dynamic good solidifies into a concept, practice, or tradition in a culture, it becomes a static good. Creativity, mysticism, dreams, and even good guesses or luck are examples of dynamic good in action. Religious traditions, laws, and science textbooks are examples of static goods.

Pirsig describes dynamic quality as the “pre-intellectual cutting edge of reality.” By this, he means that before concepts, logic, laws, and mathematical formulas are discovered, there is a process of searching and grasping that has not yet settled into a pattern or solution. For example, invention and discovery are often not the outcome of calculation or logical deduction, but of a “free association of ideas” that tends to occur when one is not mentally concentrating at all. Many creative people, from writers to mathematicians, have noted that they came up with their best ideas while resting, engaging in everyday activities, or dreaming.

Dynamic quality is not just responsible for human creation — it is fundamental to all evolution, from the physical level of atoms and molecules, to the biological level of life forms, to the social level of human civilization, to the intellectual level of human thought. Dynamic quality exists everywhere, but it has no specific goals or plans — it always consists of spur-of-the-moment actions, decisions, and guesses about how to overcome obstacles to “betterness.”

It is difficult to conceive of dynamic quality — by its very nature, it is resistant to conceptualization and definition, because it has no stable form or structure. If it did have a stable form or structure, it would not be dynamic.

However, the French philosopher Henri Bergson (1859-1941) provided a way to think about dynamic quality, by positing change as the fundamental nature of reality. (See Beyond the “Mechanism” Metaphor in Physics.) In Bergson’s view, traditional reason, science, and philosophy created static, eternal forms and posited these forms as the foundation of reality — but in fact these forms were tools for understanding reality, not reality itself. Reality always flowed and was impossible to fully capture in any static conceptual form. This flow could best be understood through perception rather than conception. Unfortunately, as philosophy created larger and larger conceptual categories, it tended to become dominated by empty abstractions such as “substance,” “numbers,” and “ideas.” Bergson proposed that only an intuitive approach that enlarged perceptual knowledge through feeling and imagination could advance philosophy out of the dead end of static abstractions.

________________________

The Flow of Time

Bergson argued that we miss the flow of time when we use the traditional tools of science, mathematics, and philosophy. Science conceives of time as simply one coordinate in a deterministic space-time block ruled by eternal laws; mathematics conceives of time as consisting of equal segments on a graph; and philosophers since Plato have conceptualized the world as consisting of the passing shadows of eternal forms.

These may be useful conceptualizations, argues Bergson, but they do not truly grasp time. Whether it is an eternal law, a graph, or an eternal form, such depictions are snapshots of reality; they do not and cannot represent the indivisible flow of time that we experience. The laws of science in particular neglected the elements of indeterminism and freedom in the universe. (Henri Bergson once debated Einstein on this topic). The neglect of real change by science was the result of science’s ambition to foresee all things, which motivated scientists to focus on the repeatable and calculable elements of nature, rather than the genuinely new. (The Creative Mind, Mineola, New York: Dover, 2007, p. 3) Those events that could not be predicted were tossed aside as being merely random or unknowable. As for philosophy, Bergson complained that the eternal forms of the philosophers were empty abstractions — the categories of beauty and justice and truth were insufficient to serve as representations of real experience.

Actual reality, according to Bergson, consisted of “unceasing creation, the uninterrupted upsurge of novelty.” (The Creative Mind, p. 7) Time was not merely a coordinate for recording motion in a determinist universe; time was “a vehicle of creation and choice.” (p. 75) The reality of change could not be captured in static concepts, but could only be grasped intuitively. While scientists saw evolution as a combination of mechanism and random change, Bergson saw evolution as a result of a vital impulse (élan vital) that pervaded the universe. Although this vital impetus possessed an original unity, individual life forms used this vital impetus for their own ends, creating conflict between life forms. (Creative Evolution, pp. 50-51)

Biologists attacked Bergson on the grounds that there was no “vital impulse” that they could detect and measure. But the biologists argued from the reductionist premise that everything could be explained by reference to smaller parts; since there was no single detectable force animating life, they concluded there was no “vital impetus.” Bergson’s premise, by contrast, was holistic, referring to the broader action of organic development from lower orders to higher orders, culminating in human beings. There was no separate force — rather, entities organized, survived, and reproduced by absorbing and processing energy in multiple forms. In the words of one eminent biologist, organisms are “resilient patterns . . . in an energy flow.” There is no separate or unique energy of life – just energy.

The Superiority of Perception over Conception

Bergson believed with William James that all knowledge originated in perception and feeling; as human mental powers increased, conceptual categories were created to organize and generalize what we (and others) discovered through our senses. Concepts were necessary to advance human knowledge, of course. But over time, abstract concepts came to dominate human thought to the point at which pure ideas were conceived as the ultimate reality — hence Platonism in philosophy, mathematical Platonism in mathematics, and eternal laws in science. Bergson believed that although we needed concepts, we also needed to rediscover the roots of concepts in perception and feeling:

If the senses and the consciousness had an unlimited scope, if in the double direction of matter and mind the faculty of perceiving was indefinite, one would not need to conceive any more than to reason. Conceiving is a make-shift when perception is not granted to us, and reasoning is done in order to fill up the gaps of perception or to extend its scope. I do not deny the utility of abstract and general ideas, — any more than I question the value of bank-notes. But just as the note is only a promise of gold, so a conception has value only through the eventual perceptions it represents. . . . the most ingeniously assembled conceptions and the most learnedly constructed reasonings collapse like a house of cards the moment the fact — a single fact rarely seen — collides with these conceptions and these reasonings. There is not a single metaphysician, moreover, not one theologian, who is not ready to affirm that a perfect being is one who knows all things intuitively without having to go through reasoning, abstraction and generalisation. (The Creative Mind, pp. 108-9)

In the end, despite their obvious utility, the conceptions of philosophy and science tend “to weaken our concrete vision of the universe.” (p. 111) But we clearly do not have God-like powers to perceive everything, and we are not likely to get such powers. So what do we do? Bergson argues that instead of “trying to rise above our perception of things” through concepts, we “plunge into [perception] for the purpose of deepening it and widening it.” (p. 111) But how exactly are we to do this?

Enlarging Perception

There is one group of people, argues Bergson, that has mastered the ability to deepen and widen perception: artists. From paintings to poetry to novels and musical compositions, artists are able to show us things and events that we do not directly perceive, and to evoke a mood within us that we can understand even if the particular form the artist presents has never been seen or heard by us before. Bergson writes that artists are idealists who are often absent-mindedly detached from “reality.” But it is precisely because artists are detached from everyday living that they are able to see things that ordinary, practical people do not:

[Our] perception . . . isolates that part of reality as a whole that interests us; it shows us less the things themselves than the use we can make of them. It classifies, it labels them beforehand; we scarcely look at the object, it is enough for us to know which category it belongs to. But now and then, by a lucky accident, men arise whose senses or whose consciousness are less adherent to life. Nature has forgotten to attach their faculty of perceiving to their faculty of acting. When they look at a thing, they see it for itself, and not for themselves. They do not perceive simply with a view to action; they perceive in order to perceive — for nothing, for the pleasure of doing so. In regard to a certain aspect of their nature, whether it be their consciousness or one of their senses, they are born detached; and according to whether this detachment is that of a particular sense, or of consciousness, they are painters or sculptors, musicians or poets. It is therefore a much more direct vision of reality that we find in the different arts; and it is because the artist is less intent on utilizing his perception that he perceives a greater number of things. (The Creative Mind, p. 114)

The Method of Intuition

Bergson argued that the indivisible flow of time and the holistic nature of reality required an intuitive approach, that is, “the sympathy by which one is transported into the interior of an object in order to coincide with what there is unique and consequently inexpressible in it.” (The Creative Mind, p. 135) Analysis, as in the scientific disciplines, breaks down objects into elements, but this method of understanding is a translation, an insight that is less direct and holistic than intuition. The intuition comes first, and one can pass from intuition to analysis but not from analysis to intuition.

In his essay on the French philosopher Ravaisson, Bergson underscored the benefits and necessity of an intuitive approach:

[Ravaisson] distinguished two different ways of philosophizing. The first proceeds by analysis; it resolves things into their inert elements; from simplification to simplification it passes to what is most abstract and empty. Furthermore, it matters little whether this work of abstraction is effected by a physicist that we may call a mechanist or by a logician who professes to be an idealist: in either case it is materialism. The other method not only takes into account the elements but their order, their mutual agreement and their common direction. It no longer explains the living by the dead, but, seeing life everywhere, it defines the most elementary forms by their aspiration toward a higher form of life. It no longer brings the higher down to the lower, but on the contrary, the lower to the higher. It is, in the real sense of the word, spiritualism. (p. 202)

From Philosophy to Religion

A religious tendency is apparent in Bergson’s philosophical writings, and this tendency grew more pronounced as Bergson grew older. It is likely that Bergson saw religion as a form of perceptual knowledge of the Good, widened by imagination. Bergson’s final major work, The Two Sources of Morality and Religion (Notre Dame, IN: University of Notre Dame Press, 1977) was both a philosophical critique of religion and a religious critique of philosophy, while acknowledging the contributions of both forms of knowledge. Bergson drew a distinction between “static religion,” which he believed originated in social obligations to society, and “dynamic religion,” which he argued originated in mysticism and put humans “in the stream of the creative impetus.” (The Two Sources of Morality and Religion, p. 179)

Bergson was a harsh critic of the superstitions of “static religion,” which he called a “farrago of error and folly.” These superstitions were common in all cultures, and originated in human imagination, which created myths to explain natural events and human history. However, Bergson noted, static religion did play a role in unifying primitive societies and creating a common culture within which individuals would subordinate their interests to the common good of society. Static religion created and enforced social obligations, without which societies could not endure. Religion also provided comfort against the depressing reality of death. (The Two Sources of Morality and Religion, pp. 102-22)

In addition, it would be a mistake, Bergson argued, to suppose that one could obtain dynamic religion without the foundation of static religion. Even the superstitions of static religion originated in the human perception of a beneficent virtue that became elaborated into myths. Perhaps interpreting a cool running spring or a warm fire on the hearth as the action of spirits or gods was a case of imagination run rampant, but these were still real goods, as were the other goods provided by the pagan gods.

Dynamic religion originated in static religion, but also moved above and beyond it, with a small number of exceptional human beings who were able to reach the divine source: “In our eyes, the ultimate end of mysticism is the establishment of a contact . . . with the creative effort which life itself manifests. This effort is of God, if it is not God himself. The great mystic is to be conceived as an individual being, capable of transcending the limitations imposed on the species by its material nature, thus continuing and extending the divine action.” (pp. 220-21)

In Bergson’s view, mysticism is intuition turned inward, to the “roots of our being, and thus to the very principle of life in general.” (p. 250) Rational philosophy cannot fully capture the nature of mysticism, because the insights of mysticism cannot be captured in words or symbols, except perhaps in the word “love”:

God is love, and the object of love: herein lies the whole contribution of mysticism. About this twofold love the mystic will never have done talking. His description is interminable, because what he wants to describe is ineffable. But what he does state clearly is that divine love is not a thing of God: it is God Himself. (p. 252)

Even so, just as dynamic religion bases its advanced moral insights in part on the social obligations of static religion, dynamic religion also must be propagated through the images and symbols supplied by the myths of static religion. (One can see this interplay of static and dynamic religion in Jesus and Gandhi, both of whom were rooted in their traditional religions, but offered original teachings and insights that went beyond their traditions.)

Toward the end of his life, Henri Bergson strongly considered converting to Catholicism (although the Church had already placed three of Bergson’s works on its Index of Prohibited Books). Bergson saw Catholicism as best representing his philosophical inclinations for knowing through perception and intuition, and for joining the vital impetus responsible for creation. However, Bergson was Jewish, and the anti-Semitism of 1930s and 1940s Europe made him reluctant to officially break with the Jewish people. When the Nazis conquered France in 1940 and the Vichy puppet government of France decided to persecute Jews, Bergson registered with the authorities as a Jew and accepted the persecutions of the Vichy regime with stoicism. Bergson died in 1941 at the age of 81.

Once among the most celebrated intellectuals in the world, today Bergson is largely forgotten. Even among French philosophers, Bergson is much less known than Descartes, Sartre, Comte, and Foucault. It is widely believed that Bergson lost his debate with Einstein in 1922 on the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time, p. 6) But it is recognized today even among physicists that while Einstein’s conception of spacetime in relativity theory is an excellent theory for predicting the motion of objects, it does not disprove the existence of time and real change. It is also true that Bergson’s writings are extraordinarily difficult to understand at times. One can go through pages of dense, complex text trying to understand what Bergson is saying, get suddenly hit with a colorful metaphor that seems to explain everything — and then have a dozen more questions about the meaning of the metaphor. Nevertheless, Bergson remains one of the very few philosophers who looked beyond eternal forms to the reality of a dynamic universe, a universe moved by a vital impetus always creating, always changing, never resting.

The Metaphor of “Mechanism” in Science

The writings of science make frequent use of the metaphor of “mechanism.” The universe is conceived as a mechanism, life is a mechanism, and even human consciousness has been described as a type of mechanism. If a phenomenon is not an outcome of a mechanism, then it is random. Nearly everything science says about the universe and life falls into the two categories of mechanism and random chance.

The use of the mechanism metaphor is something most of us hardly ever notice. Science, allegedly, is all about literal truth and precise descriptions. Metaphors are for poetry and literature. But in fact mathematics and science use metaphors. Our understandings of quantity, space, and time are based on metaphors derived from our bodily experiences, as George Lakoff and Rafael Núñez have pointed out in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book Making Truth: Metaphor in Science. Among these are the “billiard ball” and “plum pudding” models of the atom, as well as the “energy landscape” of protein folding. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly.

What I wish to do in this essay is closely examine the use of the mechanism metaphor in science. I will argue that this metaphor has been extremely useful in advancing our knowledge of the natural world, but its overuse as a descriptive and predictive model has led us down the wrong path to fully understanding reality — in particular, understanding the actual nature of life.

____________________________

Thousands of years ago, human beings attributed the actions of natural phenomena to spirits or gods. A particular river or spring or even tree could have its own spirit or minor god. Many humans also believed that they themselves possessed a spirit or soul which occupied the body, gave the body life and motion and intelligence, and then departed when the body died. According to the Bible, Genesis 2:7, when God created Adam from the dust of the ground, God “breathed into his nostrils the breath of life; and man became a living soul.” Knowing very little of biology and human anatomy, early humans were inclined to think that spirit/breath gave life to material bodies; and when human bodies no longer breathed, they were dead, so presumably the “spirit” went someplace else. The ancient Hebrews also saw a role for blood in giving life, which is why they regarded blood as sacred. Thus, the Hebrews placed many restrictions on the consumption and handling of blood when they slaughtered animals for sacrifice and food. These views about the spiritual aspects of breath and blood are also the historical basis of “vitalism,” the theory that life consists of more than material parts, and must somehow be based on a vital principle, spark, or force, in addition to matter. 

The problem with the vitalist outlook is that it did not appreciably advance our knowledge of nature and the human body.  The idea of a vital principle or force was too vague and could not be tested or measured or even observed. Of course, humans did not have microscopes thousands of years ago, so we could not see cells and bacteria, much less atoms.

By the 17th century, thinkers such as Thomas Hobbes and Rene Descartes proposed that the universe and even life forms were types of mechanisms, consisting of many parts that interacted in such a way as to result in predictable patterns. The universe was often analogized to a clock. (The first mechanical clock was developed around 1300 A.D., but water clocks, based on the regulated flow of water, have been in use for thousands of years.) The great French scientist Pierre-Simon Laplace was an enthusiast for the mechanist viewpoint and even argued that the universe could be regarded as completely determined from its beginnings:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes. (A Philosophical Essay on Probabilities, Chapter Two)

Laplace’s radical determinism was not embraced by all scientists, but it was nevertheless a common view. Later, as the science of biology developed, it was argued that the evolution of life was not as determined as the motion of the planets. Rather, random genetic mutations resulted in new life forms, and “natural selection” determined that fit life forms flourished and reproduced, while unfit forms died out. In this view, physical mechanisms combined with random chance explained evolution.

The astounding advances in physics and biology in the past centuries certainly seem to justify the mechanism metaphor. Reality does seem to consist of various parts that interact in predictable cause-and-effect patterns. We can predict the motions of objects in space, and build technologies that send objects at the right speed and in the right direction to reach their targets. We can also methodically trace illnesses to a dysfunction in one or more parts of the body, and this dysfunction can often be treated by medicine or surgery.

But have we been overusing the mechanism metaphor? Does reality consist of nothing but determined and predictable cause-and-effect patterns with an element of random chance mixed in?

I believe that we can shed some light on this subject by first examining what mechanisms are — literally — and then examining what resemblances and differences there are between mechanisms and the actual universe, between mechanisms and actual life.

____________________


Even in ancient times, human beings created mechanisms, from clocks to catapults to cranes to odometers. The Antikythera mechanism of ancient Greece, constructed around 100 B.C., was a sophisticated device with over 30 gears that was able to predict astronomical motions and is considered to be one of the earliest computers. A fragment of the mechanism was discovered in an ocean shipwreck in 1901.


Over subsequent centuries, human civilization created steam engines, propeller-driven ships, automobiles, airplanes, digital watches, computers, robots, nuclear reactors, and spaceships.

So what do most or all of these mechanisms have in common?

  1. Regularity and Predictability. Mechanisms have to be reliable. They have to do exactly what you want every time. Clocks can’t run fast, then run slow; automobiles can’t unilaterally change direction or speed; nuclear reactors can’t overheat on a whim; computers have to give the right answer every time. 
  2. Precision. The parts that make up a mechanism must fit together and move together in precise ways, or breakdown, or even disaster, will result. Engineering tolerances are often measured in fractions of a millimeter.
  3. Stability and Durability. Mechanisms are often made of metal, and for good reason. Metal can endure extreme forces and temperatures, and, if properly maintained, can last for many decades. Metal can slightly expand and contract depending on temperature, and metals can have some flexibility when needed, but metallic constructions are mostly stable in shape and size. 
  4. Unfree/Determined. Mechanisms are built by humans for human purposes. When you manage the controls of a mechanism correctly, the results are predictable. If you get into your car and decide to drive north, you will drive north. The car will not dispute you or override your commands, unless it is programmed to override your commands, in which case it is simply following a different set of instructions. The car has no will of its own. Human beings would not build mechanisms if such mechanisms acted according to their own wills. The idea of a self-willing mechanism is a staple of science fiction, but not of science.
  5. They do not grow. Mechanisms do not become larger over time or change their basic structure like living organisms. This would be contrary to the principle of durability/stability. Mechanisms are made for a purpose, and if there is a new purpose, a new mechanism will be made.
  6. They do not reproduce. Mechanisms do not have the power of reproduction. If you put a mechanism into a resource-rich environment, it will not consume energy and materials and give birth to new mechanisms. Only life has this power. (A partial exception can be made in the case of  computer “viruses,” which are lines of code programmed to duplicate themselves, but the “viruses” are not autonomous — they do the bidding of the programmer.)
  7. Random events lead to the universal degradation of mechanisms, not improvement. According to neo-Darwinism, random mutations in the genes of organisms are responsible for evolution; in most cases, mutations are harmful, but in some cases they lead to improvement, producing new and more complex organisms, ultimately culminating in human beings. So what kind of random changes lead to improved mechanisms? None, really. Mechanisms change over time with random events, but these events degrade mechanisms rather than improve them. Rust sets in, parts break, electrical connections fail, lubricating fluids leak. If you leave a set of carefully preserved World War One biplanes out in a field, without human intervention, they will not eventually evolve into jet planes and rocket ships. They will just break down. Likewise, electric toasters will not evolve into supercomputers, no matter how many millions of years you wait. Of course, organisms also degrade and die, but they have the power of reproduction, which continues the population and creates opportunities for improvement.

There is one hypothetical mechanism that, if constructed, could mimic actual organisms: a self-replicating machine. Such a machine could conceivably contain plans within itself to gather materials and energy from its environment and use them to construct copies of itself, growing exponentially in numbers as more and more machines reproduce themselves. Such machines could even be programmed to “mutate,” creating variations in their descendants. However, no such mechanism has yet been produced. Meanwhile, primitive single-celled life forms on earth have been successfully reproducing for four billion years.

Now, let’s compare mechanisms to life forms. What are the characteristics of life?

  1. Adaptability/Flexibility. The story of life on earth is a story of adaptability and flexibility. The earliest life forms, single cells, apparently arose in hydrothermal vents deep in the ocean. Later, some of these early forms evolved into multi-cellular creatures, which spread throughout the oceans. After 3.5 billion years, fish emerged, and then much later, the first land creatures. Over time, life adapted to different environments: sea, land, rivers, caves, air; and also to different climates, from the steamiest jungles to frozen environments. 
  2. Creativity/Diversification. Life is not only adaptive, it is highly creative, branching into the most diverse forms over time. Today, there are millions of species. Even in the deepest parts of the ocean, life forms thrive under pressures that would crush most other organisms. There are bacteria that can live in water at or near the boiling point. The tardigrade can survive the cold, hostile vacuum of space. The bacterium Deinococcus radiodurans is able to survive extreme forms of radiation by means of one of the most efficient DNA repair capabilities ever seen. Now it’s true that among actual mechanisms there is also a great variety; but these mechanisms are not self-created; they are created by humans and retain their forms unless specifically modified by humans.
  3. Drives toward cooperation / symbiosis. Traditional Darwinist views of evolution see life as competition and “survival of the fittest.” However, more recent theorists of evolution point to the strong role of cooperation in the emergence and survival of advanced life forms. Biologist Lynn Margulis has argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, and created a more advanced life form. This theory was regarded with much skepticism at the time it was proposed, but over time it became widely accepted.  Today, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes!  By contrast, the parts of a mechanism don’t naturally come together to form the mechanism; they are forced together by their manufacturer.
  4. Growth. Life is characterized by growth. All life forms begin with either a single cell, or the merger of two cells, after which a process of repeated division begins. In multicellular organisms, the initial cell eventually becomes an embryo; and when that embryo is born, becoming an independent life form, it continues to grow. In some species, that life form develops into an animal that can weigh hundreds or even thousands of pounds. This, from a microscopic cell! No existing mechanism is capable of that kind of growth.
  5. Reproduction. Mechanisms eventually disintegrate, and life forms die. But life forms have the capability of reproducing and making copies of themselves, carrying on the line. In an environment with adequate natural resources, the number of life forms can grow exponentially. Mechanisms have not mastered that trick.
  6. Free will/choice. Mechanisms are either under direct human control, are programmed to do certain things, or perform in a regular pattern, such as a clock. Life forms, in their natural settings, are free and have their own purposes. There are some regular patterns — sleep cycles, mating seasons, winter migration. But the day-to-day movements and activities of life forms are largely unpredictable. They make spur-of-the-moment decisions on where to search for food, where to find shelter, whether to fight or flee from predators, and which mate is most acceptable. In fact, the issue of mate choice is one of the most intriguing illustrations of free will in life forms — there is evidence that species may select mates for beauty over actual fitness, and human egg cells even play a role in selecting which sperm cells will be allowed to penetrate them.
  7. Able to gather energy from its environment. Mechanisms require energy to work, and they acquire such energy from wound springs or weights (in clocks), electrical outlets, batteries, or fuel. These sources of energy are provided by humans in one way or another. But life forms are forced to acquire energy on their own, and even the most primitive life forms mastered this feat billions of years ago. Plants get their energy from the sun, and animals get their energy from plants or other animals. It’s true that some mechanisms, such as space probes, can operate on their own for many years while drawing energy from solar panels. But these panels were invented and produced by humans, not by mechanisms.
  8. Self-organizing. Mechanisms are built, but life forms are self-organizing. Small components join other small components, forming a larger organization; this larger organization gathers together more components. There is a gradual growth and differentiation of functions — digestion, breathing, brain and nervous system, mobility, immune function. Now this process is very, very slow: evolution takes place over hundreds of millions of years. But mechanisms are not capable of self-organization. 
  9. Capacity for healing and self-repair. When mechanisms are broken, or not working at full potential, a human being intervenes to fix the mechanism. When organisms are injured or infected, they can self-repair by initiating multiple processes, either simultaneously or in stages: immune cells fight invaders; blood cells clot in open wounds to stop bleeding; dead tissues and cells are removed by other cells; and growth hormones are released to begin the process of building new tissue. As healing nears completion, cells originally sent to repair the wound are removed or modified. Now self-repair is not always adequate, and organisms die all the time from injury or infection. But they would die much sooner, and probably a species would not persist at all, without the means of self-repair. Even the existing medications and surgery that modern science has developed largely work with and supplement the body’s healing capacities — after all, surgery would be unlikely to work in most cases without the body’s means of self-repair after the surgeon completes cutting and sewing.

______________________


The mechanism metaphor served a very useful purpose in the history of science, by spurring humanity to uncover the cause-and-effect patterns responsible for the motions of stars and planets and the biological functions of life. We can now send spacecraft to planets; we can create new chemicals to improve our lives; we now know that illness is the result of a breakdown in the relationship between the parts of a living organism; and we are getting better and better in figuring out which human parts need medication or repair, so that lifespans and general health can be extended.

But if we are seeking the broadest possible understanding of what life is, and not just the biological functions of life, we must abandon the mechanism metaphor as inadequate and even deceptive. I believe the mechanism metaphor misses several major characteristics of life:

  1. Change. Whether it is growth, reproduction, adaptation, diversification, or self-repair, life is characterized by change, by plasticity, flexibility, and malleability. 
  2. Self-Driven Progress. There is clearly an overall improvement in life forms over time. Changes in species may take place over millions or billions of years, but even so, the differences between a single-celled animal and contemporary multicellular creatures are astonishingly large. It is not just a question of “complexity,” but of capability. Mammals, reptiles, and birds have senses, mobility, and intelligence that single-celled creatures do not have.
  3. Autonomy and freedom. Although some scientists are inclined to think of living creatures, including humans, as “gene machines,” life forms can’t be easily analogized to pre-programmed machines. Certainly, life forms have goals that they pursue — but the pursuit of these goals in an often hostile environment requires numerous spur-of-the-moment decisions that do not lead to the predictable outcomes we expect of mechanisms.

Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, argues in Lila that the fundamental nature of life is its ability to move away from mechanistic patterns, and science has overlooked this fact because scientists consider it their job to look for mechanisms:

Mechanisms are the enemy of life. The more static and unyielding the mechanisms are, the more life works to evade them or overcome them. The law of gravity, for example, is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the simple protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. . . .  This would explain why patterns of life [in evolution] do not change solely in accord with causative ‘mechanisms’ or ‘programs’ or blind operations of physical laws. They do not just change valuelessly. They change in ways that evade, override and circumvent these laws. The patterns of life are constantly evolving in response to something ‘better’ than that which these laws have to offer. (Lila, 1991 hardcover edition, p. 143)

But if the “mechanism” metaphor is inadequate, what are some alternative conceptualizations and metaphors that can retain the previous advances of science while deepening our understanding and helping us make new discoveries? I will discuss this issue in the next post.

Next: Beyond the “Mechanism” Metaphor in Biology


Is Truth a Type of Good?

“[T]ruth is one species of good, and not, as is usually supposed, a category distinct from good, and co-ordinate with it. The true is the name of whatever proves itself to be good in the way of belief. . . .” – William James, “What Pragmatism Means”

“Truth is a static intellectual pattern within a larger entity called Quality.” – Robert Pirsig, Lila


Does it make sense to think of truth as a type of good? The initial reaction of most people to this claim is negative, sometimes strongly so. Surely what we like and what is true are two different things. The reigning conception of truth is known as the “correspondence theory of truth,” which argues simply that in order for a statement to be true it must correspond to reality. In this view, the words or concepts or claims we state must match real things or events, and match them exactly, whether those things are good or not.

The American philosopher William James (1842-1910) acknowledged that our ideas must agree with reality in order to be true. But where he parted company with most of the rest of the world was in what it meant for an idea to “agree.” In most cases, he argued, ideas cannot directly copy reality. According to James, “of many realities our ideas can only be symbols and not copies. . . . Any idea that helps us to deal, whether practically or intellectually, with either the reality or its belongings, that doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting, will agree sufficiently to meet the requirement.” He also argued that “True ideas are those we can assimilate, validate, corroborate, and verify.” (“Pragmatism’s Conception of Truth“) Many years later, Robert Pirsig argued in Zen and the Art of Motorcycle Maintenance and Lila that the truths of human knowledge, including science, were developed out of an intuitive sense of good or “quality.”

But what does this mean in practice? Many truths are unpleasant, and reality often does not match our desires. Surely truth should correspond to reality, not what is good.

One way of understanding what James and Pirsig meant is to examine the origins and development of language and mathematics. We use written language and mathematics as tools to make statements about reality, but the tools themselves do not merely “copy” or even strictly correspond to reality. In fact, these tools should be understood as symbolic systems for communication and understanding. In the earliest stages of human civilization, these symbolic systems did try to copy or correspond to reality; but the strict limitation of “corresponding” to reality turned out to be a hindrance to the truth, and new, more creative symbols were required for knowledge to advance.


_______________________________


The first written languages consisted of pictograms, that is, drawn depictions of actual things — human beings, stars, cats, fish, houses. Pictograms had one big advantage: by clearly depicting the actual appearance of things, everyone could quickly understand them. They were the closest thing to a universal language; anyone from any culture could understand pictograms with little instruction.

However, there were some pretty big disadvantages to the use of pictograms as a written language. Many of the things we all see in everyday life can be clearly communicated through drawings. But there are a lot of ideas, actions, abstract concepts, and details that are not so easily communicated through drawings. How does one depict activities such as running, hunting, fighting, and falling in love, while making it clear that one is communicating an activity and not just a person? How does one depict a tribe, kingdom, battle, or forest, without becoming bogged down in drawing pictograms of all the persons and objects involved? How does one depict attributes and distinguish between specific types of people and specific types of objects? How does one depict feelings, emotions, ideas, and categories? Go through a dictionary at random sometime and see how many words can be depicted in a clear pictogram. There are not many. There is also the problem of differences in artistic ability and the necessity of maintaining standards. Everyone may have a different idea of what a bird looks like and different abilities in drawing a bird.

These limitations led to an interesting development in written language: over hundreds or thousands of years, pictograms became increasingly abstract, to the point at which their form did not copy or correspond to what they represented at all. This development took place across civilizations, as seen in this graphic, in which the top pictograms represent the earliest forms and the bottom ones the later forms:

(Source: Wikipedia, https://en.wikipedia.org/wiki/History_of_writing)

Eventually, pictograms were abandoned by most civilizations altogether in favor of alphabets. By using combinations of letters to represent objects and ideas, it became easier for people to learn how to read and write. Instead of having to memorize tens of thousands of pictograms, people simply needed to learn new combinations of letters/sounds. No artistic ability was required.

One could argue that this development in writing systems does not address the central point of the correspondence theory of truth, that a true statement must correspond to reality. In this theory, it is perfectly OK for an abstract symbol to represent something. If someone writes “I caught a fish,” it does not matter if the person draws a fish or uses abstract symbols for a fish, as long as this person, in reality, actually did catch a fish. From the pragmatic point of view, however, the evolution of human symbolic systems toward abstraction is a good illustration of pragmatism’s main point: by making our symbolic systems better, human civilizations were able to communicate more, understand more, educate more, and acquire more knowledge. Pictograms fell short in helping us “deal with reality,” and that’s why written language had to advance above and beyond pictograms.


Let us turn to mathematics. The earliest humans were aware of quantities, but tended to depict quantities in a direct and literal manner. For small quantities, such as two, the ancient Egyptians would simply draw two pictograms of the object. Nothing could correspond to reality better than that. However, for larger quantities, it was hard, tedious work to draw the same pictogram over and over. So early humans used tally marks or hash marks to indicate quantities, with “four” represented as four distinct marks: | | | | and then perhaps a symbol or pictogram of the object. Again, these earliest depictions of numbers were so simple and direct, the correspondence to reality so obvious, that they were easily understood by people from many different cultures.

In retrospect, tally marks appear to be very primitive and hardly a basis for a mathematical system. However, I argue that tally marks were actually a revolutionary advance in how human beings understood quantities — because for the first time, quantity became an abstraction disconnected from particular objects. One did not have to make distinctions between three cats, three kings, or three bushels of grain; the quantity “three” could be understood on its own, without reference to what it was representing. Rather than drawing three cats, three kings, or three bushels of grain, one could use | | |  to represent any group of three objects.

The problem with tally marks, of course, was that this system could not easily handle large quantities or permit complex calculations. So, numerals were invented. The ancient Egyptian numeral system used tally marks for numbers below ten, but then used other symbols for larger quantities: ten, hundred, thousand, and so forth.

The ancient Roman numeral system also evolved out of tally marks, with | | | or III representing “three,” but with different symbols for five (V), ten (X), fifty (L), hundred (C), five hundred (D), and thousand (M). Numbers were depicted by writing the largest numerical symbols on the left and the smallest to the right, adding the symbols together to get the quantity (example: 1350 = MCCCL); a smaller numerical symbol to the left of a larger numerical symbol required subtraction (example: IX = 9). As with the Egyptian system, Roman numerals were able to cope with large numbers, but rather than the more literal depiction offered by tally marks, the symbols were a more creative interpretation of quantity, with implicit calculations required for proper interpretation of the number.
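To make this reading rule concrete, here is a minimal Python sketch of it (the function and its name are purely illustrative, not drawn from any of the sources discussed): scan the symbols left to right, adding each one’s value, but subtracting it when a smaller symbol precedes a larger one.

```python
# Values of the Roman symbols described above.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Read a Roman numeral by the additive rule, with the subtractive
    exception for a smaller symbol placed before a larger one."""
    total = 0
    for i, symbol in enumerate(numeral):
        value = ROMAN_VALUES[symbol]
        # Subtractive case, e.g. the I in IX or the C in CM.
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MCCCL"))  # 1350
print(roman_to_int("IX"))     # 9
```

Note that the reader of the numeral must perform these implicit comparisons and calculations; the symbols themselves do not literally depict the quantity the way tally marks do.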

The use of numerals by ancient civilizations represented a further increase in the abstraction of quantities. With numerals, one could make calculations of almost any quantity of any objects, even imaginary objects or no objects. Teachers instructed children how to use numerals and how to make calculations, usually without any reference to real-world objects. A minority of intellectuals studied numbers and calculations for many years, developing general theorems about the relationships between quantities. And before long, the power and benefits of mathematics were such that mathematicians became convinced that mathematics, and not the actual objects we once attached to numbers, was the ultimate reality of the universe. (On the theory of “mathematical Platonism,” see this post.)

For thousands of years, Roman numerals continued to be used. Rome was able to build and administer a great empire, while using these numerals for accounting, commerce, and engineering. In fact, the Romans were famous for their accomplishments in engineering. It was not until the 14th century that Europe began to discover the virtues of the Hindu-Arabic numeral system. And although it took centuries more, today the Hindu-Arabic system is the most widely-used system of numerals in the world.

Why is this?

The Hindu-Arabic system is noted for two major accomplishments: its positional decimal system and the number zero. The “positional decimal system” simply refers to a base 10 system in which the value of a digit is based upon its position. A single numeral may be multiplied by ten or one hundred or one thousand, depending on its position in the number. For example, the number 832 is: 8×100 + 3×10 + 2. We generally don’t notice this, because we spent years in school learning this system, and it comes to us automatically that the first digit “8” in 832 means 8 x 100. Roman numerals never worked this way. The Romans grouped quantities in symbols representing ones, fives, tens, fifties, one hundreds, etc. and added the symbols together. So the Roman version of 832 is DCCCXXXII (500 + 100 + 100 + 100 + 10 + 10 + 10 + 1 + 1).
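As a small, purely illustrative companion to the Roman sketch above (again, the function name is my own), the positional expansion can be written out in Python, which makes visible how each digit’s contribution is fixed solely by its place:

```python
def positional_expansion(n: int) -> str:
    """Expand a Hindu-Arabic numeral into digit-times-power-of-ten terms."""
    digits = str(n)
    terms = []
    for i, d in enumerate(digits):
        place_value = 10 ** (len(digits) - 1 - i)  # 100, 10, 1, ...
        terms.append(f"{d} x {place_value}")
    return " + ".join(terms)

print(positional_expansion(832))  # 8 x 100 + 3 x 10 + 2 x 1
```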

Because the Roman numeral system is additive, adding Roman numbers is easy — you just combine all the symbols. But multiplication is harder, and division is even harder, because it’s not so easy to take apart the different symbols. In fact, for many calculations, the Romans used an abacus, rather than trying to write everything down. The Hindu-Arabic system makes multiplication and division easy, because every digit, depending on its placement, is a multiple of 1, 10, 100, 1000, etc.

The invention of the positional decimal system took thousands of years, not because ancient humans were stupid, but because symbolizing quantities and their relationships in a way that is useful is actually hard work and requires creative interpretation. You just don’t look at nature and say, “Ah, there’s the number 12, from the positional decimal system!”

In fact, even many of the simplest numbers took thousands of years to become accepted. The number zero was not introduced to Europe until the 11th century, and it took several more centuries for zero to become widely used. Negative numbers did not appear in the west until the 15th century, and even then, they were controversial among the best mathematicians until the 18th century.

The shortcomings of seeing mathematical truths as a simple literal copying of reality become even clearer when one examines the origins and development of weights and measures. Here too, early human beings started out by picking out real objects as standards of measurement, only to find them unsuitable in the long run. One of the most well-known units of measurement in ancient times was the cubit, defined as the length of a man’s forearm from elbow to the tip of the middle finger. The foot was defined as the length of a man’s foot. The inch was the width of a man’s thumb. A basic unit of weight was the grain, that is, a single grain of barley or wheat. All of these measures corresponded to something real, but the problem, of course, was that there was a wide variation in people’s body parts, and grains could also vary in weight. What was needed was standardization; and it was not too long before governing authorities began to establish common standards. In many places throughout the world, authorities agreed that a single definition of each unit, based on a single object kept in storage, would be the standard throughout the land. The objects chosen were a matter of social convention, based upon convenience and usefulness. Nature or reality did not simply provide useful standards of measurement; there was too much variation even among the same types of objects provided by nature.


At this point, advocates of the correspondence theory of truth may argue, “Yes, human beings can use a variety of symbolic systems, and some are better than others. But the point is that symbolic systems should all represent the same reality. No matter what mathematical system you use, two plus two should still equal four.”

In response, I would argue that for very simple questions (2+2=4), the type of symbolic system you use will not make a big difference — you can use tally marks, Roman numerals, or Hindu-Arabic numerals. But the type of symbolic system you use will definitely make a difference in how many truths you can uncover and particularly how many complicated truths you can grasp. Without good symbolic systems, many truths will remain forever hidden from us.  As it was, the Roman numeral system was probably responsible for the lack of mathematical accomplishments of the Romans, even if their engineering was impressive for the time. And in any case, the pragmatic theory of truth already acknowledges that truth must agree with reality — it just cannot be a copy of reality. In the words of William James, an ideal symbolic system “helps us to deal, whether practically or intellectually, with either the reality or its belongings . . . doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting.”(“Pragmatism’s Conception of Truth“)

Zen and the Art of Science: A Tribute to Robert Pirsig

Author Robert Pirsig, widely acclaimed for his bestselling books, Zen and the Art of Motorcycle Maintenance (1974) and Lila (1991), passed away in his home on April 24, 2017. A well-rounded intellectual equally at home in the sciences and the humanities, Pirsig made the case that scientific inquiry, art, and religious experience were all particular forms of knowledge arising out of a broader form of knowledge about the Good or what Pirsig called “Quality.” Yet, although Pirsig’s books were bestsellers, contemporary debates about science and religion are oddly neglectful of Pirsig’s work. So what did Pirsig claim about the common roots of human knowledge, and how do his arguments provide a basis for reconciling science and religion?

Pirsig gradually developed his philosophy as a response to a crisis in the foundations of scientific knowledge, a crisis he first encountered while pursuing studies in biochemistry. The popular consensus at the time was that scientific methods promised objectivity and certainty in human knowledge. One developed hypotheses, conducted observations and experiments, and came to a conclusion based on objective data. That was how scientific knowledge accumulated.

However, Pirsig noted that, contrary to his own expectations, the number of hypotheses could easily grow faster than experiments could test them. One could not just come up with hypotheses – one had to make good hypotheses, ones that could eliminate the need for endless and unnecessary observations and testing. Good hypotheses required mental inspiration and intuition, components that were mysterious and unpredictable.  The greatest scientists were precisely like the greatest artists, capable of making immense creative leaps before the process of testing even began.  Without those creative leaps, science would remain on a never-ending treadmill of hypothesis development – this was the “infinity of hypotheses” problem.  And yet, the notion that science depended on intuition and artistic leaps ran counter to the established view that the scientific method required nothing more than reason and the observation and recording of an objective reality.

Consider Einstein. One of history’s greatest scientists, Einstein hardly ever conducted actual experiments. Rather, he frequently engaged in “thought experiments,” imagining what it would be like to chase a beam of light, what it would feel like to be in a falling elevator, and what a clock would look like if the streetcar he was riding raced away from the clock at the speed of light.

One of the most fruitful sources of hypotheses in science is mathematics, a discipline which consists of the creation of symbolic models of quantitative relationships. And yet, the nature of mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. The Indian mathematical genius Srinivasa Ramanujan was a Hindu mystic who believed that solutions were revealed to him in dreams by the goddess Namagiri.

Intuition and inspiration were human solutions to the infinity-of-hypotheses problem. But Pirsig noted there was a related problem that had to be solved — the infinity of facts. Science depended on observation, but the issue of which facts to observe was neither obvious nor purely objective. Scientists had to make value judgments as to which facts were worth close observation and which facts could be safely overlooked, at least for the moment. This process often depended heavily on an imprecise sense or feeling, and sometimes mere accident brought certain facts to scientists’ attention. What values guided the search for facts? Pirsig cited Poincaré’s work The Foundations of Science. According to Poincaré, general facts were more important than particular facts, because one could explain more by focusing on the general than on the specific. The desire for simplicity came next – by beginning with simple facts, one could begin accumulating knowledge about nature without getting bogged down in complexity at the outset. Finally, facts that promised new findings were more valuable than trivial facts. The point was not to gather as many facts as possible but to condense as much experience as possible into a small volume of interesting findings.

Research on the human brain supports the idea that the ability to value is essential to the discernment of facts. Antonio Damasio, a professor of neuroscience, describes in his book Descartes’ Error: Emotion, Reason, and the Human Brain several cases of human beings who lost the part of the brain responsible for emotions, whether through accident or brain tumor. These persons, some of whom had previously been known as shrewd and smart businessmen, experienced a serious decline in their competence after the emotional centers of their brains were damaged. They lost the capacity to make good decisions, to get along with other people, to manage their time, and to plan for the future. In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests produced normal scores. The only thing missing was their capacity to have emotions. Yet this made a huge difference. Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making in the face of infinite facts became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51) Damasio discusses several other similar case studies.

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion and therefore hypothetically capable of perfect objectivity? Well, it seems likely that science would advance very slowly, at best, or perhaps not at all. After all, the same tools needed for effective decision-making in everyday life are needed for the scientific enterprise as well. A value-free scientist would not only be unable to sustain the social interaction that science requires; he or she would be unable to develop a research plan, manage his or her time, or stick to that plan.

_________

Where Pirsig’s philosophy becomes particularly controversial and difficult to understand is in his approach to the truth. The dominant view of truth today is known as the “correspondence” theory of truth – that is, any human statement that is true must correspond precisely to something objectively real. In this view, the laws of physics and chemistry are real because they correspond to actual events that can be observed and demonstrated. Pirsig argues on the contrary that in order to understand reality, human beings must invent symbolic and conceptual models, that there is a large creative component to these models (it is not just a matter of pure correspondence to reality), and that multiple such models can explain the same reality even if they are based on wholly different principles. Math, logic, and even the laws of physics are not “out there” waiting to be discovered – they exist in the mind. But that doesn’t mean these things are bad or wrong or unreal.

There are several reasons why our symbolic and conceptual models don’t correspond literally to reality, according to Pirsig. First, there is always going to be a gap between reality and the concepts we use to describe it, because reality is continuous and flowing, while concepts are discrete and static. The creation of concepts necessarily requires cutting reality into pieces, but there is no one right way to divide reality, and something is always lost when this is done. In fact, Pirsig noted, our very notions of subjectivity and objectivity, the former allegedly representing personal whims and the latter representing truth, rest upon an artificial division of reality into subjects and objects; other ways of dividing reality can be just as legitimate or useful. In addition, concepts are necessarily static: they can’t be always changing or we would not be able to make sense of them. Reality, however, is always changing. Finally, describing reality is not always a matter of using direct and literal language; it may require analogy and imaginative figures of speech.

Because of these difficulties in expressing reality directly, a variety of symbolic and conceptual models, based on widely varying principles, are not only possible but necessary – necessary for science as well as other forms of knowledge. Pirsig points to the example of the crisis that occurred in mathematics in the nineteenth century. For many centuries, it was widely believed that geometry, as developed by the ancient Greek mathematician Euclid, was the most exact of all of the sciences. Based on a small number of axioms from which one could deduce multiple propositions, Euclidean geometry represented a nearly perfect system of logic. However, while most of Euclid’s axioms were seemingly indisputable, mathematicians had long experienced great difficulty in satisfactorily demonstrating the truth of one of the chief axioms on which Euclidean geometry was based: the parallel postulate. This slight uncertainty led to an even greater crisis when mathematicians discovered that they could reverse or negate this axiom and create alternative systems of geometry that were every bit as logical and valid as Euclidean geometry. The science of geometry was gradually replaced by the study of multiple geometries. Pirsig cited Poincaré, who pointed out that the principles of geometry were not eternal truths but definitions, and that the test of a system of geometry was not whether it was true but how useful it was.
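To make the contrast concrete, consider a standard textbook result (my illustration, not Pirsig’s or Poincaré’s): the angle sum of a triangle comes out differently in each of the three classical geometries, yet each system is internally consistent.

```latex
% Interior angles A, B, C of a triangle of a given area:
\begin{aligned}
\text{Euclidean (parallel postulate holds):} \quad & A + B + C = \pi \\
\text{Spherical (no parallel lines, sphere radius } R\text{):} \quad & A + B + C = \pi + \frac{\text{Area}}{R^{2}} \\
\text{Hyperbolic (many parallels, curvature } -1/k^{2}\text{):} \quad & A + B + C = \pi - \frac{\text{Area}}{k^{2}}
\end{aligned}
```

Which of the three applies is not decidable by logic alone; it depends on which model proves useful for the problem at hand, which is exactly Poincaré’s point.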

So how do we judge the usefulness or goodness of our symbolic and conceptual models? Traditionally, we have been told that pure objectivity is the only solution to the chaos of relativism, in which nothing is absolutely true. But Pirsig pointed out that this hasn’t really been how science has worked. Rather, models are constructed according to the often competing values of simplicity and generalizability, as well as accuracy. Theories aren’t just about matching concepts to facts; scientists are guided by a sense of the Good (Quality) to encapsulate as much of the most important knowledge as possible into a small package. But because there is no one right way to do this, rather than converging to one true symbolic and conceptual model, science has instead developed a multiplicity of models. This has not been a problem for science, because if a particular model is useful for addressing a particular problem, that is considered good enough.
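A small numerical sketch of this trade-off (my own illustration with made-up data, not an example from Pirsig): fit the same noisy observations with a simple model and a highly flexible one, and compare how each performs on cases it has not seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: an underlying linear pattern observed with noise.
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

x_new = np.linspace(0, 1, 200)   # cases not used in fitting
y_true = 2.0 * x_new             # the underlying pattern itself

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)                            # least-squares fit
    fit_err = np.abs(np.polyval(coeffs, x) - y).mean()           # accuracy on observed facts
    gen_err = np.abs(np.polyval(coeffs, x_new) - y_true).mean()  # accuracy on new cases
    print(f"degree {degree}: observed-fact error {fit_err:.3f}, "
          f"new-case error {gen_err:.3f}")

# The flexible degree-9 model hugs the observed facts more tightly, but the
# simple line typically tracks new cases better: simplicity, accuracy, and
# generalizability pull against one another, just as described above.
```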

The crisis in the foundations of mathematics created by the discovery of non-Euclidean geometries and other factors (such as the paradoxes inherent in set theory) has never really been resolved. Mathematics is no longer the source of absolute and certain truth, and in fact, it never really was. That doesn’t mean that mathematics isn’t useful – it certainly is enormously useful and helps us make true statements about the world. It’s just that there’s no single perfect and true system of mathematics. (On the crisis in the foundations of mathematics, see the papers here and here.) Mathematical axioms, once believed to be certain truths and the foundation of all proofs, are now considered definitions, assumptions, or hypotheses. And a substantial number of mathematicians now declare outright that mathematical objects are imaginary, that particular mathematical formulas may be used to model real events and relationships, but that mathematics itself has no existence outside the human mind. (See The Mathematical Experience by Philip J. Davis and Reuben Hersh.)

Even some basic rules of logic accepted for thousands of years have come under challenge in the past hundred years, not because they are absolutely wrong, but because they are inadequate in many cases, and a different set of rules is needed. The Law of the Excluded Middle states that any proposition must be either true or false (“P” or “not P” in symbolic logic). But ever since mathematicians discovered propositions that can be neither proved nor disproved within a given system, logicians have added a third category of “possible/unknown.” Other systems of logic have been invented that use the idea of multiple degrees of truth, or even an infinite continuum of truth, from absolutely false to absolutely true.
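As a concrete sketch of what a third truth value does to the excluded middle, here is a minimal rendering of Kleene’s strong three-valued logic in Python (my own illustration; the text names no particular system, and the logics with continuous degrees of truth work differently):

```python
# Kleene's strong three-valued logic: True, False, and Unknown (None).

def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False          # a single False settles the conjunction
    if a is None or b is None:
        return None           # otherwise any unknown is contagious
    return True

def k_or(a, b):
    if a is True or b is True:
        return True           # a single True settles the disjunction
    if a is None or b is None:
        return None
    return False

# The Law of the Excluded Middle fails for unknowns:
p = None                      # a proposition of unknown truth value
print(k_or(p, k_not(p)))      # -> None, not True
```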

The notion that we need multiple symbolic and conceptual models to understand reality remains controversial to many. It smacks of relativism, they argue, in which every person’s opinion is as valid as another person’s. But historically, the use of multiple perspectives hasn’t resulted in the abandonment of intellectual standards among mathematicians and scientists. One still needs many years of education and an advanced degree to obtain a job as a mathematician or scientist, and there is a clear hierarchy among practitioners, with the very best mathematicians and scientists working at the most prestigious universities and winning the highest awards. That is because there are still standards for what is good mathematics and science, and scholars are rewarded for solving problems and advancing knowledge. The fact that no one has agreed on what is the One True system of mathematics or logic isn’t relevant. In fact, physicist Stephen Hawking has argued:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and to the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient (The Grand Design, p. 7).

Among the most controversial and mind-bending claims Pirsig makes is that the very laws of nature themselves exist only in the human mind. “Laws of nature are human inventions, like ghosts,” he writes. Pirsig even remarks that it makes no sense to think of the law of gravity existing before the universe, that it only came into existence when Isaac Newton thought of it. It’s an outrageous claim, but if one looks closely at what the laws of nature actually are, it’s not so crazy an argument as it first appears.

For all of the advances that science has made over the centuries, there remains a sharp division of views among philosophers and scientists on one very important issue: are the laws of nature actual causal powers responsible for the origins and continuance of the universe or are the laws of nature summary descriptions of causal patterns in nature? The distinction is an important one. In the former view, the laws of physics are pre-existing or eternal and possess god-like powers to create and shape the universe; in the latter view, the laws have no independent existence – we are simply finding causal patterns and regularities in nature that allow us to predict and we call these patterns “laws.”

One powerful argument in favor of the latter view is that most of the so-called “laws of nature,” contrary to the popular view, actually have exceptions – and sometimes the exceptions are large. That is because the laws are simplified models of real phenomena. The laws were cobbled together by scientists in order to strike a careful balance between the values of scope, predictive accuracy, and simplicity. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has noted that as a result of this balance of values, physical laws are actually approximations that apply only within a certain range. This point has also been made more recently by Ronald Giere, a professor of philosophy at the University of Minnesota, in Science Without Laws and Nancy Cartwright of the University of California at San Diego in How the Laws of Physics Lie.

Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Newton’s laws of motion also have exceptions, breaking down at very high speeds and at the atomic scale. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of a single planet. The ideal gas law is an approximation that becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe Mendel’s laws should not even be considered laws.
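The ideal gas case is easy to make concrete. The sketch below (my own illustration; van der Waals constants for CO2 taken from standard tables, in liter-bar units) compares the ideal gas law with the van der Waals equation, a better approximation that corrects for molecular volume and attraction, as the gas is compressed at room temperature.

```python
# Ideal gas law vs. van der Waals equation for CO2, at fixed temperature.
R = 0.083145           # gas constant, L*bar/(mol*K)
a, b = 3.640, 0.04267  # van der Waals constants for CO2
T = 300.0              # temperature, K

for V in (25.0, 1.0, 0.1):   # molar volume, L/mol; smaller = more compressed
    p_ideal = R * T / V                  # ideal gas law: p = RT/V
    p_vdw = R * T / (V - b) - a / V**2   # van der Waals correction terms
    print(f"V = {V:5.2f} L/mol: ideal {p_ideal:8.2f} bar, "
          f"van der Waals {p_vdw:8.2f} bar")

# At V = 25 the two agree to a fraction of a percent; at V = 0.1 they
# disagree wildly. The "law" holds only within a restricted range.
```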

So if we think of laws of nature as being pre-existing, eternal commandments, with god-like powers to shape the universe, how do we account for these exceptions to the laws? The standard response by scientists is that their laws are simplified depictions of the real laws. But if that is the case, why not state the “real” laws? Because by the time we wrote down the real laws, accounting for every possible exception, we would have an extremely lengthy and detailed description of causation that would not recognizably be a law. The whole point of the laws of nature was to develop tools by which one could predict a large number of phenomena (scope), maintain a good-enough correspondence to reality (accuracy), and make it possible to calculate predictions without spending an inordinate amount of time and effort (simplicity). That is why, although Einstein’s conception of gravity and his “field equations” have supplanted Newton’s law of gravitation, physicists still use Newton’s “law” in most cases: it is simpler and easier to use, and they resort to Einstein’s complex equations only when they have to! The laws of nature are human tools for understanding, not mathematical gods that shape the universe. The actual practice of science confirms Pirsig’s point that the symbolic and conceptual models we create to understand reality have to be judged by how good they are – simple correspondence to reality is insufficient and in many cases not even possible.
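The trade-off is easy to quantify. The following sketch (my own, using published values for the constants) computes the one correction general relativity makes to Mercury’s orbit that Newton’s law misses entirely, the famous 43 arcseconds per century of extra perihelion precession:

```python
import math

# General relativity's leading correction to Mercury's orbit.
GM_sun = 1.32712e20   # gravitational parameter of the Sun, m^3/s^2
c = 2.99792e8         # speed of light, m/s
a = 5.7909e10         # Mercury's semi-major axis, m
e = 0.2056            # Mercury's orbital eccentricity

# Extra perihelion advance per orbit, in radians:
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / 87.969   # Mercury's year is about 88 days
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(f"{arcsec_per_century:.1f} arcseconds per century")  # prints ~43.0

# Roughly 43 arcseconds per century, about 0.00012 degrees per year: for
# almost every practical purpose, the simpler Newtonian "law" is good enough.
```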

_____________

 

Ultimately, Pirsig concluded, the scientific enterprise is not that different from the pursuit of other forms of knowledge – it is based on a search for the Good. Occasionally, you see this acknowledged explicitly, when mathematicians discuss the beauty of certain mathematical proofs or results, as defined by their originality, simplicity, ability to solve many problems at once, or their surprising nature. Scientists also sometimes write about the importance of elegance in their theories, defined as the ability to explain as much as possible, as clearly as possible, and as simply as possible. Depending on the field of study, the standards of judgment, the tools, and the scope of inquiry may differ. But all forms of human knowledge — art, rhetoric, science, reason, and religion — originate in, and are dependent upon, a response to the Good or Quality. The difference between science and religion is that scientific models are more narrowly restricted to understanding how to predict and manipulate natural phenomena, whereas religious models address larger questions of meaning and value.

Pirsig did not ignore or suppress the failures of religious knowledge with regard to factual claims about nature and history. The traditional myths of creation and the stories of various prophets were contrary to what we know now about physics, biology, paleontology, and history. In addition, Pirsig was by no means a conventional theist — he apparently did not believe that God was a personal being who possessed the attributes of omniscience and omnipotence, controlling or potentially controlling everything in the universe.

However, Pirsig did believe that God was synonymous with the Good, or “Quality,” and was the source of all things.  In fact, Pirsig wrote that his concept of Quality was similar to the “Tao” (the “Way” or the “Path”) in the Chinese religion of Taoism. As such, Quality was the source of being and the center of existence. It was also an active, dynamic power, capable of bringing about higher and higher levels of being. The evolution of the universe, from simple physical forms, to complex chemical compounds, to biological organisms, to societies was Dynamic Quality in action. The most recent stage of evolution – Intellectual Quality – refers to the symbolic models that human beings create to understand the universe. They exist in the mind, but are a part of reality all the same – they represent a continuation of the growth of Quality.

What many religions were missing, in Pirsig’s view, was not objectivity, but dynamism: an ability to correct old errors and achieve new insights. The advantage of science was its willingness and ability to change. According to Pirsig,

If scientists had simply said Copernicus was right and Ptolemy was wrong without any willingness to further investigate the subject, then science would have simply become another minor religious creed. But scientific truth has always contained an overwhelming difference from theological truth: it is provisional. Science always contains an eraser, a mechanism whereby new Dynamic insight could wipe out old static patterns without destroying science itself. Thus science, unlike orthodox theology, has been capable of continuous, evolutionary growth. (Lila, p. 222)

The notion that religion and orthodoxy go together is widespread among believers and secularists. But there is no necessary connection between the two. All religions originate in social processes of story-telling, dialogue, and selective borrowing from other cultures. In fact, many religions begin as dangerous heresies before they become firmly established — orthodoxies come later. The problem with most contemporary understandings of religion is that one’s adherence to religion is often measured by one’s commitment to orthodoxy and membership in religious institutions rather than an honest quest for what is really good.  A person who insists on the literal truth of the Bible and goes to church more than once a week is perceived as being highly religious, whereas a person not connected with a church but who nevertheless seeks religious knowledge wherever he or she can find it is considered less committed or even secular.  This prejudice has led many young people to identify as “spiritual, not religious,” but religious knowledge is not inherently about unwavering loyalty to an institution or a text. Pirsig believed that mysticism was a necessary component of religious knowledge and a means of disrupting orthodoxies and recovering the dynamic aspect of religious insight.

There is no denying that the most prominent disputes between science and religion in the last several centuries regarding the physical workings of the universe have resulted in a clear triumph for scientific knowledge over religious knowledge.  But the solution to false religious beliefs is not to discard religious knowledge — religious knowledge still offers profound insights beyond the scope of science. That is why it is necessary to recover the dynamic nature of religious knowledge through mysticism, correction of old beliefs, and reform. As Pirsig argued, “Good is a noun.” Not because Good is a thing or an object, but because Good  is the center and foundation of all reality and all forms of knowledge, whether we are consciously aware of it or not.

What Does Science Explain? Part 5 – The Ghostly Forms of Physics

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria — that is, in relation to how much it describes, it must be rather simple. — John von Neumann (“Method in the Physical Sciences,” in The Unity of Knowledge, 1955)

Now we come to the final part of our series of posts, “What Does Science Explain?” (If you have not already, you can peruse parts 1, 2, 3, and 4 here). As I mentioned in my previous posts, the rise of modern science was accompanied by a change in humanity’s view of metaphysics, that is, our theory of existence. Medieval metaphysics, largely influenced by ancient philosophers, saw human beings as the center or summit of creation; furthermore, medieval metaphysics proposed a sophisticated, multifaceted view of causation. Modern scientists, however, rejected much of medieval metaphysics as subjective and saw reality as consisting mainly of objects impacting or influencing each other in mathematical patterns.  (See The Metaphysical Foundations of Modern Science by E.A. Burtt.)

I have already critically examined certain aspects of the metaphysics of modern science in parts 3 and 4. For part 5, I wish to look more closely at the role of Forms in causation — what Aristotle called “formal causation.” This theory of causation was strongly influenced by Aristotle’s predecessor Plato and his Theory of Forms. What is Plato’s “Theory of Forms”? In brief, Plato argued that the world we see around us — including all people, trees, animals, stars, planets, and other objects — is not the true reality. The world and the things in it are imperfect and perishable realizations of perfect forms that are eternal, and that continually give birth to the things we see. That is, forms are the eternal blueprints of perfection which the material world imperfectly represents. True philosophers do not focus on the material world as it is, but on the forms that material things imperfectly reflect. In order to judge a sculpture, painting, or natural setting, a person must have an inner sense of beauty. In order to evaluate the health of a particular human body, a doctor must have an idea of what a perfectly healthy human form is. In order to evaluate a government’s system of justice, a citizen must have an idea about what perfect justice would look like. In order to critically judge leaders, citizens must have a notion of the virtues that such a leader should have, such as wisdom, honesty, and courage. Ultimately, according to Plato, a wise human being must learn and know the perfect forms behind the imperfect things we see: we must know the Form of Beauty, the Form of Justice, the Form of Wisdom, and the ultimate form, the Form of Goodness, from which all other forms flow.

Unsurprisingly, many intelligent people in the modern world regard Plato’s Theory of Forms as dubious or even outrageous. Modern science teaches us that sure knowledge can only be obtained by observation and testing of real things, but Plato tells us that our senses are deceptive, that the true reality is hidden behind what we sense. How can we possibly confirm that the forms are real? Even Plato’s student Aristotle had problems with the Theory of Forms and argued that while the forms were real, they did not really exist until they were manifested in material things.

However, there is one important sense in which modern science retained the notion of formal causation, and that is in mathematics. In other words, most scientists have rejected Plato’s Theory of Forms in all aspects except for Plato’s view of mathematics. “Mathematical Platonism,” as it is called, is the idea that mathematical forms are objectively real and are part of the intrinsic order of the universe. However, there are also sharp disagreements on this subject, with some mathematicians and scientists arguing that mathematical forms are actually creations of the human imagination.

The chief difference between Plato and modern scientists on the study of mathematics is this: According to Plato, the objects of geometry — perfect squares, perfect circles, perfect planes — existed nowhere in the material world; we only see imperfect realizations. But the truly wise studied the perfect, eternal forms of geometry rather than their imperfect realizations. Therefore, while astronomical observations indicated that planetary bodies orbited in imperfect circles, with some irregularities and errors, Plato argued that philosophers must study the perfect forms instead of the actual orbits! (The Republic, XXVI, 524D-530C) Modern science, on the other hand, is committed to observation and study of real orbits as well as the study of perfect mathematical forms.

Is it tenable to believe that Plato and Aristotle’s view of eternal forms is mostly subjective nonsense, but that they were absolutely right about mathematical forms being real? I argue that this selective borrowing from the ancient Greeks doesn’t quite work, because some of the questions and difficulties involved in proving the reality of Platonic forms also afflict mathematical forms.

The main argument for mathematical Platonism is that mathematics is absolutely necessary for science: mathematics is the basis for the most important and valuable physical laws (which are usually in the form of equations), and everyone who accepts science must agree that the laws of nature or the laws of physics exist. However, the counterargument to this claim is that while mathematics is necessary for human beings to conduct science and understand reality, that does not mean that mathematical objects or even the laws of nature exist objectively, that is, outside of human minds.

I have discussed some of the mysterious qualities of the “laws of nature” in previous posts (here and here). It is worth pointing out that there remains a serious debate among philosophers as to whether the laws of nature are (a) descriptions of causal regularities which help us to predict or (b) causal forces in themselves. This is an important distinction that most people, including scientists, don’t notice, although the theoretical consequences are enormous. Physicist Kip Thorne writes that laws “force the Universe to behave the way it does.” But if laws have that kind of power, they must be ubiquitous (exist everywhere), eternal (exist prior to the universe), and have enormous powers although they have no detectable energy or mass — in other words, the laws of nature constitute some kind of supernatural spirit. On the other hand, if laws are summary descriptions of causation, these difficulties can be avoided — but then the issue arises: do the laws of nature or of physics really exist objectively, outside of human minds, or are they simply human-constructed statements about patterns of causation? There are good reasons to believe the latter is true.

The first thing that needs to be said is that nearly all of these so-called laws of nature are actually approximations of what really happens in nature, approximations that work only under certain restrictive conditions; step outside those conditions, and even the approximations fall apart. Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of a single planet. The ideal gas law is an approximation that becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe Mendel’s laws should not even be considered laws.

The fact of the matter is that even with the best laws that science has come up with, we still can’t predict the motions of more than two interacting astronomical bodies without making unrealistic simplifying assumptions (a short numerical illustration follows Scriven’s quote below). Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has concluded that the laws of nature or physics are actually cobbled together by scientists based on multiple criteria:

Briefly we may say that typical physical laws express a relationship between quantities or a property of systems which is the simplest useful approximation to the true physical behavior and which appears to be theoretically tractable. “Simplest” is vague in many cases, but clear for the extreme cases which provide its only use. “Useful” is a function of accuracy and range and purpose. (Michael Scriven, “The Key Property of Physical Laws — Inaccuracy,” in Current Issues in the Philosophy of Science, ed. Herbert Feigl)
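As promised above, here is a minimal sketch of the two-body limit (my own illustration, in toy units with G = 1 and arbitrary equal masses and starting positions, not a realistic ephemeris). Two copies of the same three-body system, differing by one part in a million in a single coordinate, are integrated forward step by step, and their forecasts steadily part ways:

```python
import numpy as np

def accelerations(pos, masses):
    """Newtonian gravity with G = 1 (toy units)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def advance(pos, vel, masses, dt, steps):
    """Leapfrog integration: prediction happens one small step at a time."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos, masses)
        vel = vel + 0.5 * dt * acc
    return pos, vel

masses = np.array([1.0, 1.0, 1.0])
pos_a = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.7]])
vel_a = np.array([[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]])
pos_b = pos_a.copy()
pos_b[2, 1] += 1e-6          # a one-in-a-million nudge to one coordinate
vel_b = vel_a.copy()

for chunk in range(4):
    pos_a, vel_a = advance(pos_a, vel_a, masses, 0.001, 5000)
    pos_b, vel_b = advance(pos_b, vel_b, masses, 0.001, 5000)
    drift = np.linalg.norm(pos_a - pos_b)
    print(f"after t = {5 * (chunk + 1)}: the two forecasts differ by {drift:.2e}")
```

No closed-form law hands us these trajectories; prediction is stepwise numerical approximation, tiny uncertainties compound, and that is why the simplifying assumptions are needed in the first place.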

The response to Scriven’s argument is that it doesn’t disprove the objective existence of physical laws — it simply means that the laws scientists come up with are approximations of real, objectively existing underlying laws. But if that is the case, why don’t scientists simply state what the true laws are? Because the “laws” would actually end up being extremely long and complex statements of causation, with so many conditions and exceptions that they would not really be considered laws.

An additional counterargument to mathematical Platonism is that while mathematics is necessary for science, it is not necessary for the universe. This is another important distinction that many people overlook. Understanding how things work often requires mathematics, but that doesn’t mean the things in themselves require mathematics. The study of geometry has given us pi and the Pythagorean theorem, but a child does not need to know these things in order to draw a circle or a right triangle. Circles and right triangles can exist without anyone, including the universe, knowing the value of pi or the Pythagorean theorem. Calculus was invented in order to understand change and acceleration; but an asteroid, a bird, or a cheetah is perfectly capable of changing direction or accelerating without needing to know calculus.

Even among mathematicians and scientists, there is a significant minority who have argued that mathematical objects are actually creations of the human imagination, that math may be used to model aspects of reality, but it does not necessarily do so. Mathematicians Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.” Derek Abbott, a professor of engineering, writes that engineers tend to reject mathematical Platonism: “the engineer is well acquainted with the art of approximation. An engineer is trained to be aware of the frailty of each model and its limits when it breaks down. . . . An engineer . . . has no difficulty in seeing that there is no such a thing as a perfect circle anywhere in the physical universe, and thus pi is merely a useful mental construct.” (“The Reasonable Ineffectiveness of Mathematics”) Einstein himself, distinguishing between mathematical objects used as models and pure mathematics, wrote that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” Hartry Field, a philosopher at New York University, has argued that mathematics is a useful fiction that may not even be necessary for science. Field goes so far as to show that it is possible to reconstruct Newton’s theory of gravity without using mathematics. (There is more discussion on this subject here and here.)

So what can we conclude about the existence of forms? I have to admit that although I’m skeptical, I have no sure conclusions. It seems unlikely that forms exist outside the mind . . . but I can’t prove they don’t exist either. Forms do seem to be necessary for human reasoning — no thinking human can do without them. And forms seem to be rooted in reality: perfect circles, perfect squares, and perfect human forms can be thought of as imaginative projections of things we see, unlike Sherlock Holmes or fire-breathing dragons or flying spaghetti monsters, which are more creatively fictitious. Perhaps one could reconcile these opposing views by positing that the human mind and imagination are part of the universe itself, and that through them the universe is gradually becoming aware of itself.

Another way to think about this issue was offered by Robert Pirsig in Zen and the Art of Motorcycle Maintenance. According to Pirsig, Plato made a mistake by positing Goodness as a form. Even considered as the highest form, Goodness (or “Quality,” in Pirsig’s terminology) can’t really be thought of as a static thing floating around in space or some otherworldly realm. Forms are conceptual creations of humans who are responding to Goodness (Quality). Goodness itself is not a form, because it is not an unchanging thing — it is not static or even definable. It is “reality itself, ever changing, ultimately unknowable in any kind of fixed, rigid way.” (p. 342) Once we let go of the idea that Goodness or Quality is a form, we can realize that not only is Goodness part of reality, it is reality.

As conceptual creations, ideal forms are found in both science and religion. So why, then, does there seem to be such a sharp split between science and religion as modes of knowledge? I think it comes down to this: science creates ideal forms in order to model and predict physical phenomena, while religion creates ideal forms in order to provide guidance on how we should live.

Scientists like to see how things work — they study the parts in order to understand how the wholes work. To increase their understanding, scientists may break down certain parts into smaller parts, and those parts into even smaller parts, until they come to the most fundamental, indivisible parts. Mathematics has been extremely useful in modeling and understanding these parts of nature, so scientists create and appreciate mathematical forms.

Religion, on the other hand, tends to focus on larger wholes. The imaginative element of religion envisions perfect states of being, whether it be the Garden of Eden or the Kingdom of Heaven, as well as perfect (or near perfect) humans who serve as prophets or guides to a better life. Religion is less concerned with how things work than with how things ought to work, how things ought to be. So religion will tend to focus on subjects not covered by science, including the nature and meaning of beauty, love, and justice. There will always be debates about the appropriateness of particular forms in particular circumstances, but the use of forms in both science and religion is essential to understanding the universe and our place in it.

What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of four causes as “final causation,” because it referred to the goals or ends of all things (the Greek word “telos” means “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? A number of physicists began to speculate that this was the case in the late twentieth century, when their research indicated that the physical forces and constants of the universe must lie within a very narrow range of possibilities for life to be possible, or even for the universe to exist. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming”? Consider the fact that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of fusion. In fact, stars have been referred to as the “factories” of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but most of even these trace elements are essential to human life. Without the elements produced earlier by stars, we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — it is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars made heavy elements, and he noted that the creation of heavy elements required very specific values in order for the process to work. When he concluded his research Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that the wholes are greater than the parts, and the behavior of wholes often has characteristics that are radically different from the parts that they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: reductionism, showing upward causation only, from subatomic particles to larger wholes]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: reductionism and holism combined, showing both upward and downward causation]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila, p. 143)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves: either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.