The Dynamic Quality of Henri Bergson

Robert Pirsig writes in Lila that Quality contains a dynamic good in addition to a static good. This dynamic good consists of a search for “betterness” that is unplanned and has no specific destination, but is nevertheless responsible for all progress. Once a dynamic good solidifies into a concept, practice, or tradition in a culture, it becomes a static good. Creativity, mysticism, dreams, and even good guesses or luck are examples of dynamic good in action. Religious traditions, laws, and science textbooks are examples of static goods.

Pirsig describes dynamic quality as the “pre-intellectual cutting edge of reality.” By this, he means that before concepts, logic, laws, and mathematical formulas are discovered, there is a process of searching and grasping that has not yet settled into a pattern or solution. For example, invention and discovery are often not the outcome of calculation or logical deduction, but of a “free association of ideas” that tends to occur when one is not mentally concentrating at all. Many creative people, from writers to mathematicians, have noted that they came up with their best ideas while resting, engaging in everyday activities, or dreaming.

Dynamic quality is not just responsible for human creation — it is fundamental to all evolution, from the physical level of atoms and molecules, to the biological level of life forms, to the social level of human civilization, to the intellectual level of human thought. Dynamic quality exists everywhere, but it has no specific goals or plans — it always consists of spur-of-the-moment actions, decisions, and guesses about how to overcome obstacles to “betterness.”

It is difficult to conceive of dynamic quality — by its very nature, it is resistant to conceptualization and definition, because it has no stable form or structure. If it did have a stable form or structure, it would not be dynamic.

However, the French philosopher Henri Bergson (1859-1941) provided a way to think about dynamic quality by positing change as the fundamental nature of reality. (See Beyond the “Mechanism” Metaphor in Physics.) In Bergson’s view, traditional reason, science, and philosophy created static, eternal forms and posited these forms as the foundation of reality — but in fact these forms were tools for understanding reality and not reality itself. Reality always flowed and was impossible to fully capture in any static conceptual form. This flow could best be understood through perception rather than conception. Unfortunately, as philosophy created larger and larger conceptual categories, it tended to become dominated by empty abstractions such as “substance,” “numbers,” and “ideas.” Bergson proposed that only an intuitive approach that enlarged perceptual knowledge through feeling and imagination could advance philosophy out of the dead end of static abstractions.

________________________

The Flow of Time

Bergson argued that we miss the flow of time when we use the traditional tools of science, mathematics, and philosophy. Science conceives of time as simply one coordinate in a deterministic space-time block ruled by eternal laws; mathematics conceives of time as consisting of equal segments on a graph; and philosophers since Plato have conceptualized the world as consisting of the passing shadows of eternal forms.

These may be useful conceptualizations, argues Bergson, but they do not truly grasp time. Whether it is an eternal law, a graph, or an eternal form, such depictions are snapshots of reality; they do not and cannot represent the indivisible flow of time that we experience. The laws of science in particular neglected the elements of indeterminism and freedom in the universe. (Henri Bergson once debated Einstein on this topic). The neglect of real change by science was the result of science’s ambition to foresee all things, which motivated scientists to focus on the repeatable and calculable elements of nature, rather than the genuinely new. (The Creative Mind, Mineola, New York: Dover, 2007, p. 3) Those events that could not be predicted were tossed aside as being merely random or unknowable. As for philosophy, Bergson complained that the eternal forms of the philosophers were empty abstractions — the categories of beauty and justice and truth were insufficient to serve as representations of real experience.

Actual reality, according to Bergson, consisted of “unceasing creation, the uninterrupted upsurge of novelty.” (The Creative Mind, p. 7) Time was not merely a coordinate for recording motion in a determinist universe; time was “a vehicle of creation and choice.” (p. 75) The reality of change could not be captured in static concepts, but could only be grasped intuitively. While scientists saw evolution as a combination of mechanism and random change, Bergson saw evolution as the result of a vital impulse (élan vital) that pervaded the universe. Although this vital impetus possessed an original unity, individual life forms used it for their own ends, creating conflict between life forms. (Creative Evolution, pp. 50-51)

Biologists attacked Bergson on the grounds that there was no “vital impulse” that they could detect and measure. But they argued from the reductionist premise that everything could be explained by reference to smaller parts: since there was no single detectable force animating life, there could be no “vital impetus.” Bergson’s premise, by contrast, was holistic, referring to the broader action of organic development from lower orders to higher orders, culminating in human beings. There was no separate force — rather, entities organized, survived, and reproduced by absorbing and processing energy in multiple forms. In the words of one eminent biologist, organisms are “resilient patterns . . . in an energy flow.” There is no separate or unique energy of life — just energy.

The Superiority of Perception over Conception

Bergson believed with William James that all knowledge originated in perception and feeling; as human mental powers increased, conceptual categories were created to organize and generalize what we (and others) discovered through our senses. Concepts were necessary to advance human knowledge, of course. But over time, abstract concepts came to dominate human thought to the point at which pure ideas were conceived as the ultimate reality — hence Platonism in philosophy, mathematical Platonism in mathematics, and eternal laws in science. Bergson believed that although we needed concepts, we also needed to rediscover the roots of concepts in perception and feeling:

If the senses and the consciousness had an unlimited scope, if in the double direction of matter and mind the faculty of perceiving was indefinite, one would not need to conceive any more than to reason. Conceiving is a make-shift when perception is not granted to us, and reasoning is done in order to fill up the gaps of perception or to extend its scope. I do not deny the utility of abstract and general ideas, — any more than I question the value of bank-notes. But just as the note is only a promise of gold, so a conception has value only through the eventual perceptions it represents. . . . the most ingeniously assembled conceptions and the most learnedly constructed reasonings collapse like a house of cards the moment the fact — a single fact really seen — collides with these conceptions and these reasonings. There is not a single metaphysician, moreover, not one theologian, who is not ready to affirm that a perfect being is one who knows all things intuitively without having to go through reasoning, abstraction and generalisation. (The Creative Mind, pp. 108-9)

In the end, despite their obvious utility, the conceptions of philosophy and science tend “to weaken our concrete vision of the universe.” (p. 111) But we clearly do not have God-like powers to perceive everything, and we are not likely to get such powers. So what do we do? Bergson argues that instead of “trying to rise above our perception of things” through concepts, we “plunge into [perception] for the purpose of deepening it and widening it.” (p. 111) But how exactly are we to do this?

Enlarging Perception

There is one group of people, argues Bergson, that has mastered the ability to deepen and widen perception: artists. In paintings, poetry, novels, and musical compositions, artists are able to show us things and events that we do not directly perceive, and to evoke moods within us that we can understand even if we have never before seen or heard the particular form the artist presents. Bergson writes that artists are idealists who are often absent-mindedly detached from “reality.” But it is precisely because artists are detached from everyday living that they are able to see things that ordinary, practical people do not:

[Our] perception . . . isolates that part of reality as a whole that interests us; it shows us less the things themselves than the use we can make of them. It classifies, it labels them beforehand; we scarcely look at the object, it is enough for us to know which category it belongs to. But now and then, by a lucky accident, men arise whose senses or whose consciousness are less adherent to life. Nature has forgotten to attach their faculty of perceiving to their faculty of acting. When they look at a thing, they see it for itself, and not for themselves. They do not perceive simply with a view to action; they perceive in order to perceive — for nothing, for the pleasure of doing so. In regard to a certain aspect of their nature, whether it be their consciousness or one of their senses, they are born detached; and according to whether this detachment is that of a particular sense, or of consciousness, they are painters or sculptors, musicians or poets. It is therefore a much more direct vision of reality that we find in the different arts; and it is because the artist is less intent on utilizing his perception that he perceives a greater number of things. (The Creative Mind, p. 114)

The Method of Intuition

Bergson argued that the indivisible flow of time and the holistic nature of reality required an intuitive approach, that is “the sympathy by which one is transported into the interior of an object in order to coincide with what there is unique and consequently inexpressible in it.” (The Creative Mind, p. 135) Analysis, as in the scientific disciplines, breaks down objects into elements, but this method of understanding is a translation, an insight that is less direct and holistic than intuition. The intuition comes first, and one can pass from intuition to analysis but not from analysis to intuition.

In his essay on the French philosopher Ravaisson, Bergson underscored the benefits and necessity of an intuitive approach:

[Ravaisson] distinguished two different ways of philosophizing. The first proceeds by analysis; it resolves things into their inert elements; from simplification to simplification it passes to what is most abstract and empty. Furthermore, it matters little whether this work of abstraction is effected by a physicist that we may call a mechanist or by a logician who professes to be an idealist: in either case it is materialism. The other method not only takes into account the elements but their order, their mutual agreement and their common direction. It no longer explains the living by the dead, but, seeing life everywhere, it defines the most elementary forms by their aspiration toward a higher form of life. It no longer brings the higher down to the lower, but on the contrary, the lower to the higher. It is, in the real sense of the word, spiritualism. (p. 202)

From Philosophy to Religion

A religious tendency is apparent in Bergson’s philosophical writings, and this tendency grew more pronounced as Bergson grew older. It is likely that Bergson saw religion as a form of perceptual knowledge of the Good, widened by imagination. Bergson’s final major work, The Two Sources of Morality and Religion (Notre Dame, IN: University of Notre Dame Press, 1977) was both a philosophical critique of religion and a religious critique of philosophy, while acknowledging the contributions of both forms of knowledge. Bergson drew a distinction between “static religion,” which he believed originated in social obligations to society, and “dynamic religion,” which he argued originated in mysticism and put humans “in the stream of the creative impetus.” (The Two Sources of Morality and Religion, p. 179)

Bergson was a harsh critic of the superstitions of “static religion,” which he called a “farrago of error and folly.” These superstitions were common in all cultures, and originated in human imagination, which created myths to explain natural events and human history. However, Bergson noted, static religion did play a role in unifying primitive societies and creating a common culture within which individuals would subordinate their interests to the common good of society. Static religion created and enforced social obligations, without which societies could not endure. Religion also provided comfort against the depressing reality of death. (The Two Sources of Morality and Religion, pp. 102-22)

In addition, it would be a mistake, Bergson argued, to suppose that one could obtain dynamic religion without the foundation of static religion. Even the superstitions of static religion originated in the human perception of a beneficent virtue that became elaborated into myths. Perhaps thinking of a cool running spring or a warm fire on the hearth as the actions of spirits or gods was a case of imagination run rampant, but these were still real goods, as were the other goods provided by the pagan gods.

Dynamic religion originated in static religion, but also moved above and beyond it, with a small number of exceptional human beings who were able to reach the divine source: “In our eyes, the ultimate end of mysticism is the establishment of a contact . . . with the creative effort which life itself manifests. This effort is of God, if it is not God himself. The great mystic is to be conceived as an individual being, capable of transcending the limitations imposed on the species by its material nature, thus continuing and extending the divine action.” (pp. 220-21)

In Bergson’s view, mysticism is intuition turned inward, to the “roots of our being, and thus to the very principle of life in general.” (p. 250) Rational philosophy cannot fully capture the nature of mysticism, because the insights of mysticism cannot be captured in words or symbols, except perhaps in the word “love”:

God is love, and the object of love: herein lies the whole contribution of mysticism. About this twofold love the mystic will never have done talking. His description is interminable, because what he wants to describe is ineffable. But what he does state clearly is that divine love is not a thing of God: it is God Himself. (p. 252)

Even so, just as dynamic religion bases its advanced moral insights in part on the social obligations of static religion, dynamic religion also must be propagated through the images and symbols supplied by the myths of static religion. (One can see this interplay of static and dynamic religion in Jesus and Gandhi, both of whom were rooted in their traditional religions, but offered original teachings and insights that went beyond their traditions.)

Toward the end of his life, Henri Bergson strongly considered converting to Catholicism (although the Church had already placed three of Bergson’s works on its Index of Prohibited Books). Bergson saw Catholicism as best representing his philosophical inclinations for knowing through perception and intuition, and for joining the vital impetus responsible for creation. However, Bergson was Jewish, and the anti-Semitism of 1930s and 1940s Europe made him reluctant to officially break with the Jewish people. When the Nazis conquered France in 1940 and the Vichy puppet government of France decided to persecute Jews, Bergson registered with the authorities as a Jew and accepted the persecutions of the Vichy regime with stoicism. Bergson died in 1941 at the age of 81.

Once among the most celebrated intellectuals in the world, today Bergson is largely forgotten. Even among French philosophers, Bergson is much less known than Descartes, Sartre, Comte, and Foucault. It is widely believed that Bergson lost his debate with Einstein in 1922 on the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time, p. 6) But it is recognized today even among physicists that while Einstein’s conception of spacetime in relativity theory is an excellent theory for predicting the motion of objects, it does not disprove the existence of time and real change. It is also true that Bergson’s writings are extraordinarily difficult to understand at times. One can go through pages of dense, complex text trying to understand what Bergson is saying, get suddenly hit with a colorful metaphor that seems to explain everything — and then have a dozen more questions about the meaning of the metaphor. Nevertheless, Bergson remains one of the very few philosophers who looked beyond eternal forms to the reality of a dynamic universe, a universe moved by a vital impetus always creating, always changing, never resting.

Beyond the “Mechanism” Metaphor in Physics

In previous posts, I discussed the use of the “mechanism” metaphor in science. I argued that this metaphor was useful historically in helping us to make progress in understanding cause-and-effect patterns in nature, but was limited or even deceptive in a number of important respects. In particular, the field of biology offers evidence of spontaneity, adaptability, progress, and cooperative behavior among life forms that makes the mechanism metaphor inadequate for characterizing and explaining life.

Physics is widely regarded as the pinnacle of the “hard sciences” and, as such, the field most suited to the mechanism metaphor. In fact, many physicists are so wedded to the idea of the universe as a mechanism that they are inclined to speak as if the universe literally were a mechanism, speculating that we humans are actually living inside a computer simulation. Why alien races would go through the trouble of creating simulated humans such as ourselves, with such dull, slow-moving lives, is never explained. But physicists are able to get away with these wild speculations because of their stupendous success in explaining and predicting the motion and actions of objects, from the smallest particles to the largest galaxies.

Fundamental to the success of physics is the idea that all objects are subject to laws that determine their behavior. Laws are what determine how the various parts of the universal mechanism move and interact. But when one starts asking questions about what precisely physical laws are and where they come from, one runs into questions and controversies that have never been successfully resolved.

Prior to the Big Bang theory, developed in the early twentieth century, the prevailing theory among physicists was that the universe existed eternally and had no beginning. When an accumulation of astronomical observations about the expansion of the universe led to the conclusion that the universe probably began from a single point that rapidly expanded outward, physicists gradually came to accept the idea that the universe had a beginning, in a so-called “Big Bang.” However, this raised a problem: if laws ran the universe, and the universe had a beginning, then the laws must have preexisted the universe. In fact, the laws must have been eternal.

But what evidence is there for the notion that the laws of the universe are eternal? Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed? Does it make sense to think of the law of conservation of mass existing before mass existed, or Mendel’s laws of genetics existing before genes existed? Where and how did they exist? If one takes the logic of physics far enough, one is apt to conclude that the laws of physics are some kind of God(s), or that God is a mechanism.

Furthermore, what is the evidence for the notion that laws completely determine the motion of every particle in the universe, that the universe is deterministic? Observations and experiments under controlled conditions confirmed that the laws of Newtonian physics could indeed predict the motions of various objects. But did these observations and experiments prove that all objects everywhere behaved in completely predictable patterns?

Despite some fairly large holes in the ideas of eternal laws and determinism, both ideas have been popular among physicists and among many intellectuals. There have been dissenters, however.

The French philosopher Henri Bergson (1859-1941) argued that the universe was in fact a highly dynamic system with a large degree of freedom within it. According to Bergson, our ideas about eternal laws originated in human attempts to understand the reality of change by using fixed, static concepts. These concepts were useful tools — in fact, the tools had to be fixed and static in order to be useful. But the reality that these concepts pointed to was in fact flowing, all “things” were in flux, and we made a major mistake by equating our static concepts with reality and positing a world of eternal forms, whether that of Plato or the physicists. Actual reality, according to Bergson, was “unceasing creation, the uninterrupted up-surge of novelty.” (Henri Bergson, The Creative Mind, p. 7) Moreover, the flow of time was inherently continuous; we could try to measure time by chopping it into equal segments based on the ticking of a clock or by drawing a graph with units of time along one axis, but real time did not consist of segments any more than a flowing river consisted of segments. Time is a “vehicle of creation and choice” that refutes the idea of determinism. (p. 75)

Bergson did not dispute the experimental findings of physics, but argued that the laws of physics were insufficient to describe what the universe was really like. Physicists denied the reality of time and “unceasing creation,” according to Bergson, because scientists were searching for repeatable patterns, paying little or no attention to what was genuinely new:

[A]gainst this idea of the absolute originality and unforeseeability of forms our whole intellect rises in revolt. The essential function of our intellect, as the evolution of life has fashioned it, is to be a light for our conduct, to make ready for our action on things, to foresee, for a given situation, the events, favorable or unfavorable, which may follow thereupon. Intellect therefore instinctively selects in a given situation whatever is like something already known. . .  Science carries this faculty to the highest possible degree of exactitude and precision, but does not alter its essential character. Like ordinary knowledge, in dealing with things science is concerned only with the aspect of repetition. (Henri Bergson, Creative Evolution, p. 29)

Bergson acknowledged the existence of repetitive patterns in nature, but rather than seeing these patterns as reflecting eternal and wholly deterministic laws, Bergson proposed a different metaphor. Drawing upon the work of the French philosopher Felix Ravaisson, Bergson argued that nature develops “habits” of behavior in the same manner that human beings develop habits, from initial choices of behavior that over time become regular and subconscious: “Should we not then imagine nature, in this form, as an obscured consciousness and a dormant will? Habit thus gives us the living demonstration of this truth, that mechanism is not sufficient to itself: it is, so to speak, only the fossilized residue of a spiritual activity.” In Bergson’s view, spiritual activity was the ultimate foundation of reality, not the habits/mechanisms that resulted from it (The Creative Mind, pp. 197-98, 208).

Bergson’s views did not go over well with most scientists. In 1922, in Paris, Henri Bergson publicly debated Albert Einstein about the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time). Einstein’s theory of relativity posited that there was no absolute time that ticked at the same rate for every body in the universe. Time was linked to space in a single space-time continuum, the movement of bodies was entirely deterministic, and this movement could be predicted by calculating the space-time coordinates of these bodies. In Einstein’s view, there was no sharp distinction between past, present, and future — all events existed in a single block of space-time. This idea of a “block universe” is still predominant in physics today, though it is not without dissenters.

[Images: diagrams contrasting the “presentist” view of reality held by most people with the “block universe” view preferred by physicists, in which all events are equally real. Source: Time in Cosmology]

In fact, when Einstein’s friend Michele Besso passed away in 1955, Einstein wrote a letter of condolence to Besso’s family in which he expressed his sympathies but also declared that the separation between past, present, and future was an illusion anyway, so death did not mean anything. (The Physicist and the Philosopher, pp. 338-9)

It is widely believed that Bergson lost his 1922 debate with Einstein, in large part because Bergson did not fully understand Einstein’s theory of relativity. Nevertheless, while physicists everywhere eventually came to accept relativity, many rejected Einstein’s notion of a completely determinist universe which moved as predictably as a mechanism. The French physicist Louis de Broglie and the Japanese physicist Satosi Watanabe were proponents of Bergson and argued that the indeterminacy of subatomic particles supported Bergson’s view of the reality of freedom, the flow of time, and change. Einstein, on the other hand, never did accept the indeterminacy of quantum physics and insisted to his dying day that there must be “hidden” variables that would explain everything.  (The Physicist and the Philosopher, pp. 234-38)

 

_____________________________

 

Moving forward to the present day, the debate over the reality of time has been rekindled by Lee Smolin, a theoretical physicist at the Perimeter Institute for Theoretical Physics. In Time Reborn, Smolin proposes that time is indeed real and that the neglect of this fact has hindered progress in physics and cosmology. Contrary to what you may have been taught in your science classes, Smolin argues that the laws of nature are not eternal and precise but emergent and approximate. Borrowing the theory of evolution from biology, Smolin argues that the laws of the universe evolve over time, that genuine novelty is real, and that the laws are not precise iron laws but approximate, granting a degree of freedom to what was formerly considered a rigidly deterministic universe.

One major problem with physics, Smolin argues, is that scientists tend to generalize or extrapolate based on conclusions drawn from laboratory experiments conducted under highly controlled conditions, with extraneous variables carefully excluded — Smolin calls this “physics in a box.” Now there is nothing inherently wrong with “physics in a box” — carefully controlled experiments that exclude extraneous variables are absolutely essential to progress in scientific knowledge. The problem is that one cannot take a law derived from such a controlled experiment and simply scale it up to apply to the entire universe; Smolin calls this the “cosmological fallacy.” As Smolin argues, it makes no sense to simply scale up the findings from these controlled experiments, because the universe contains everything, including the extraneous variables! Controlled experiments are too restricted and artificial to serve as an adequate basis for a theory that includes everything. Instead of generalizing from the bottom up based on isolated subsystems of the universe, physicists must construct theories of the whole universe, from the top down. (Time Reborn, pp. 38-39, 97)

Smolin is not the first scientist to argue that the laws of nature may have evolved over time. He points to the eminent physicists Paul Dirac, John Archibald Wheeler, and Richard Feynman as previous proponents of the idea that the laws may have evolved. (Time Reborn, pp. xxv-xxvi) But all of these theorists were preceded by the American philosopher and scientist Charles Sanders Peirce (1839-1914), who argued that “the only possible way of accounting for the laws of nature and for uniformity in general is to suppose them results of evolution.” (Time Reborn, p. xxv) Smolin credits Peirce with originating this idea, and proposes two ways in which the laws of nature may have evolved.

The first way is through a series of “Big Bangs,” in which each new universe selects a different set of laws. Smolin argues that there must have been an endless succession of Big Bangs in the past which have led to our current universe with its particular set of laws. (p. 120) Furthermore, Smolin proposes that black holes create new, baby universes, each with its own laws — so the black holes in our universe are the parents of other universes, and our own universe is the child of a black hole in some other universe! (pp. 123-25) Unfortunately, it seems impossible to adequately prove this theory, unless there is some way of observing these other universes with their different laws.

Smolin also proposes that laws can arise at the quantum level based on what he calls the “principle of precedence.” Smolin makes an analogy to Anglo-Saxon law, in which the decisions of judges in the past serve as precedents for decisions made today and in the future, in an ever-growing body of “common law.” The idea is that everything in the universe has a tendency to develop habits; when a truly novel event occurs, and then occurs again, and again, it settles into a pattern of repetition; that settled pattern of repetition indicates the development of a new law of nature. The law did not previously exist eternally — it emerged out of habit. (Time Reborn, pp. 146-53) Furthermore, rather than being bound by deterministic laws, the universe remains genuinely open and free, able to build new forms on top of existing forms. Smolin argues, “In the time-bound picture I propose, the universe is a process for breeding novel phenomena and states of organization, which will forever renew itself as it evolves to states of ever higher complexity and organization. The observational record tells us unambiguously that the universe is getting more interesting as time goes on.” (p. 194)
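
To make the principle of precedence a bit more concrete, here is a small toy simulation (my own illustrative sketch, not a model Smolin provides): an outcome that has already occurred becomes proportionally more likely to recur, while genuinely novel outcomes remain possible at a fixed, small weight.

```python
import random

def precedence_toy(steps=1000, novelty_weight=1.0, seed=42):
    """A hypothetical toy model of 'precedence' (illustrative only, not Smolin's).

    At each step the system either produces a genuinely novel outcome (with a
    fixed small weight) or repeats a past outcome, with probability proportional
    to how many times that outcome has already occurred.
    """
    rng = random.Random(seed)
    counts = {}          # outcome -> how many times it has occurred (its "precedent")
    next_novel_id = 0
    for _ in range(steps):
        total = sum(counts.values()) + novelty_weight
        if rng.random() < novelty_weight / total:
            outcome = next_novel_id          # a genuinely new kind of event
            next_novel_id += 1
        else:
            # choose among past outcomes, weighted by how often each has occurred
            outcomes = list(counts)
            weights = [counts[o] for o in outcomes]
            outcome = rng.choices(outcomes, weights=weights, k=1)[0]
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts

counts = precedence_toy()
top3 = sorted(counts.values(), reverse=True)[:3]
print("distinct outcomes ever seen:", len(counts))
print("share of all events taken by the three most repeated outcomes:",
      round(sum(top3) / sum(counts.values()), 2))
```

In runs of this kind, novelty dominates at the start, but a few outcomes quickly accumulate precedents and come to account for most events; that is the flavor of Smolin’s claim that lawlike regularity could emerge from accumulated habit rather than preexisting eternally.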

And yet, despite his openness to the idea of genuine novelty in the evolution of the universe, even Smolin is unable to get away from the idea of mechanisms being ultimately responsible for everything. Smolin writes that the universe began with a particular set of initial conditions and then asks “What mechanism selected the actual initial conditions out of the infinite set of possibilities?” (pp. 97-98) He does not consider the possibility that in the beginning, perhaps there was no mechanism. Indeed, this is the problem with any cosmology that aims to provide a total explanation for existence; as one goes back in time searching for origins, one eventually reaches a first cause that has no prior cause, and thus no causal explanation. One has to posit either a creator-God or an eternal, self-sufficient mechanism, or else throw up one’s hands and accept that we are faced with an unsolvable mystery.

In fact, Smolin is not as radical as his inspiration, Charles Sanders Peirce. According to Peirce, the universe did not start out with a mechanism but rather began from a condition of maximum freedom and spontaneity, only gradually adopting certain “habits” which evolved into laws. Furthermore, even after the development of laws, the universe retained a great deal of chance and spontaneity. Laws specified certain regularities, but even within these regularities, a great deal of freedom still existed. For example, life forms may have been bound to the surface of the earth and subject to the regular rotation of the earth, the orbit of the earth around the sun, and the limitations of biology, but nonetheless life forms still retained considerable freedom.

Peirce, who believed in God, held that the universe was pervaded not by mechanism but mind, which was by definition characterized by freedom and spontaneity. As the mind/universe developed certain habits, these habits congealed into laws and solid matter. In Peirce’s view, “matter . . . [is] mere specialised and partially deadened mind.” (“The Law of Mind,” The Monist, vol. 2, no. 4, July 1892) This view is somewhat similar to the view of the physicist Werner Heisenberg, who noted that “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .”

One contemporary philosopher, Philip Goff of Durham University, following Peirce and other thinkers, has argued that consciousness is not restricted to humans but in fact pervades the universe, from the smallest subatomic particles to the most intelligent human beings. This theory is known as panpsychism. (See Goff’s book Galileo’s Error: Foundations for a New Science of Consciousness.) Goff does not argue that atoms, rocks, water, stars, etc. are like humans in their thought processes, but that they have experiences, albeit very primitive and simple experiences compared to humans. The difference between the experiences of a human and the experiences of an electron is vast, but the difference still exists on a spectrum; there is no sharp dividing line that dictates that experience ends when one gets down to the level of insects, cells, viruses, molecules, atoms, or subatomic particles. In Dr. Goff’s words:

Human beings have a very rich and complex experience; horses less so; mice less so again. As we move to simpler and simpler forms of life, we find simpler and simpler forms of experience. Perhaps, at some point, the light switches off, and consciousness disappears. But it’s at least coherent to suppose that this continuum of consciousness fading while never quite turning off carries on into inorganic matter, with fundamental particles having almost unimaginably simple forms of experience to reflect their incredibly simple nature. That’s what panpsychists believe. . . .

The starting point of the panpsychist is that physical science doesn’t actually tell us what matter is. . . . Physics tells us absolutely nothing about what philosophers like to call the intrinsic nature of matter: what matter is, in and of itself. So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter. There’s just matter, on this view, nothing supernatural or spiritual. But matter can be described from two perspectives. Physical science describes matter “from the outside,” in terms of its behavior. But matter “from the inside”—i.e., in terms of its intrinsic nature—is constituted of forms of consciousness.

Unfortunately, there is, at present, no proof that the universe is pervaded by mind, nor is there solid evidence that the laws of physics have evolved. We do know that the science of physics is no longer as deterministic as it used to be. The behavior of subatomic particles is not fully predictable, despite the best efforts of physicists for nearly a century, and many physicists now acknowledge this. We also know that the concepts of laws and determinism often fail in the field of biology — there are very few actual laws in biology, and the idea that these laws preexisted life itself seems incoherent. No biologist will tell you that human beings in their present state are the inevitable product of determinist evolution and that if we started the planet Earth all over again, we would end up in 4.5 billion years with exactly the same types of life forms, including humans, that we have now. Nor can biologists predict the movement of life forms the same way that physicists can predict the movement of planets. Life forms do their own thing. Human beings retain their free will and moral responsibility. Still, the notion that the laws of physics are pre-existent and eternal appears to have no solid ground either; it is merely one of those assumptions that has become widely accepted because few have sought to challenge it or even ask for evidence.

Beyond the “Mechanism” Metaphor in Biology

In a previous post, I discussed the frequent use of the “mechanism” metaphor in the sciences. I argued that while this metaphor was useful in spurring research into cause-and-effect patterns in physical and biological entities, it was inadequate as a descriptive model for what the universe and life are like. In particular, the “mechanism” metaphor is unable to capture the reality of change, the evidence of self-driven progress, and the autonomy and freedom of life forms.

I don’t think it’s possible to abandon metaphors altogether in science, including the mechanism metaphor. But I do think that if we are to more fully understand the nature of life, in all its forms, we must supplement the mechanism metaphor with other, additional conceptualizations and metaphors that illustrate dynamic processes.

______________________________

 

David Bohm (1917-1992), one of the most prominent physicists of the 20th century, once remarked upon a puzzling development in the sciences: While 19th century classical physics operated according to the view that the universe was a mechanism, research into quantum physics in the 20th century demonstrated that the behavior of particles at the subatomic level was not nearly as deterministic as the behavior of larger objects, but rather was probabilistic. Nevertheless, while physicists adjusted to this new reality, the science of biology was increasingly adopting the metaphor of mechanism to study life. Remarked Bohm:

 It does seem odd . . . that just when physics is thus moving away from mechanism, biology and psychology are moving closer to it. If this trend continues, it may well be that scientists will be regarding  living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism. But of course, in the long run, such a point of view cannot stand up to critical analysis. For since DNA and other molecules studied by the biologist are constituted of electrons, protons, neutrons, etc., it follows that they too are capable of behaving in a far more complex and subtle way than can be described in terms of mechanical concepts. (Source: David Bohm, “Some Remarks on the Notion of Order,” in Towards a Theoretical Biology, Vol. 2: Sketches, ed. C.H. Waddington, Chicago: Aldine Publishing, p. 34.)

According to Bohm, biology had to overcome, or at least supplement, the mechanism metaphor if it was to advance. It was not enough to state that anything outside mechanical processes was “random,” for the concept of randomness was too ill-defined to constitute an adequate description of phenomena that did not fit into the mechanism metaphor. For one thing, noted Bohm, the word “random” was often used to denote “disorder,” when in fact it was impossible for a phenomenon to have no order whatsoever. Nor did unpredictability imply randomness — Bohm pointed out that the notes of a musical composition are not predictable, but nonetheless have a precise order when considered in totality. (Ibid., p. 20)

Bohm’s alternative conceptualization was that of an open order, that is, an order that consisted of multiple potential sub-orders or outcomes. For example, if you roll a single die once, there are six possible outcomes and each outcome is equally likely. But the die is not disordered; in fact, it is a precisely ordered system, with equal length dimensions on all sides of the cube and a weight equally distributed throughout the cube. (This issue is discussed in How Random is Evolution?) However, unlike the roll of a die, life is both open to new possibilities and capable of retaining previous outcomes, resulting in increasingly complex orders, orders that are nonetheless still open to change.
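
As a rough illustration of this contrast (my own sketch, not Bohm’s), compare a fair die, which is a precisely ordered system that retains nothing from one roll to the next, with a toy process that keeps its previous outcomes and can combine them into larger structures:

```python
import random
from collections import Counter

rng = random.Random(0)

# A fair die: a precisely ordered system with six equally likely outcomes,
# but each roll is independent and nothing carries over from roll to roll.
rolls = [rng.randint(1, 6) for _ in range(6000)]
print(Counter(rolls))  # each face turns up roughly 1000 times

# A toy "retaining" process: outcomes are kept, and kept outcomes can later be
# combined, so structures of growing complexity can accumulate over time.
retained = []
for _ in range(30):
    if len(retained) >= 2 and rng.random() < 0.5:
        a, b = rng.sample(retained, 2)      # combine two previously retained pieces
        retained.append((a, b))
    else:
        retained.append(rng.randint(1, 6))  # a fresh, die-like outcome
deepest = max(retained, key=lambda item: str(item).count("("))
print("most complex retained structure:", deepest)
```

The sketch is only meant to suggest that it is retention plus openness, not dice-like randomness by itself, that allows increasingly complex orders to build up.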

Although we are inclined to think of reality as composed of “things,” Bohm argued that the fundamental reality of the universe was not “things” but change: “All is process. That is to say, there is no thing in the universe. Things, objects, entities, are abstractions of what is relatively constant from a process of movement and transformation. They are like the shapes that children like to see in the clouds . . . .” (“Further Remarks on Order,” Ibid., p. 42) The British biologist C.H. Waddington, commenting on Bohm, proposed another metaphor, borrowed from the ancient Judeo-Christian sectarian movement known as Gnosticism:

‘Things’ are essentially eggs — pregnant with God-knows-what. You look at them and they appear simple enough, with a bland definite shape, rather impenetrable. You glance away for a bit and when you look back what you find is that they have turned into a fluffy yellow chick, actively running about and all set to get imprinted on you if you will give it half a chance. Unsettling, even perhaps a bit sinister. But one strand of Gnostic thought asserted that _everything_ is like that. (C.H. Waddington, “The Practical Consequences of Metaphysical Beliefs on a Biologist’s Work,” Ibid., p. 73)

Bohm adds that although the mechanism metaphor is apt to make one think of nature as an engineer or the work of an engineer (i.e., the universe as a “clock”), it could be more useful to think of nature as an artist. Bohm compares nature to a young child beginning to draw. Such a child attempting to draw a rectangle for the first time is apt to end up with a drawing that resembles random or nearly-random lines. Over time, however, the child gathers visual impressions and instructions from parents, teachers, books, and toys about what shapes are and what a rectangle is; with growth and practice, the child learns to draw a reasonably good rectangle. (Bohm, “Further Remarks on Order,” Ibid., pp. 48-50) It is an order that appears to be the outcome of randomness, but in fact emerges from an open order of multiple possibilities.

 

The American microbiologist Carl R. Woese (1928-2012), who received honors and awards for his discovery of a third domain of life, the “archaea,” also rejected the use of mechanist perspectives in biology. In an article calling for a “new biology,” Woese argued that biology borrowed too much from physics, focusing on the smallest parts of nature while lacking a holistic perspective:

Let’s stop looking at the organism purely as a molecular machine. The machine metaphor certainly provides insights, but these come at the price of overlooking much of what biology is. Machines are not made of parts that continually turn over, renew. The organism is. Machines are stable and accurate because they are designed and built to be so. The stability of an organism lies in resilience, the homeostatic capacity to reestablish itself. While a machine is a mere collection of parts, some sort of “sense of the whole” inheres in the organism, a quality that becomes particularly apparent in phenomena such as regeneration in amphibians and certain invertebrates and in the homeorhesis exhibited by developing embryos.

If they are not machines, then what are organisms? A metaphor far more to my liking is this. Imagine a child playing in a woodland stream, poking a stick into an eddy in the flowing current, thereby disrupting it. But the eddy quickly reforms. The child disperses it again. Again it reforms, and the fascinating game goes on. There you have it! Organisms are resilient patterns in a turbulent flow—patterns in an energy flow. A simple flow metaphor, of course, fails to capture much of what the organism is. None of our representations of organism capture it in its entirety. But the flow metaphor does begin to show us the organism’s (and biology’s) essence. And it is becoming increasingly clear that to understand living systems in any deep sense, we must come to see them not materialistically, as machines, but as (stable) complex, dynamic organization. (“A New Biology for a New Century,” Microbiology and Molecular Biology Reviews, June 2004, pp. 175-6)

A swirling pattern of water is perhaps not entirely satisfactory as a metaphoric conceptualization of life, but it does point to an aspect of reality that the mechanism metaphor does not satisfactorily capture: the ability of life to adapt.

Woese proposes another metaphor to describe what life was like in the very early stages of evolution, when primitive single-celled organisms were all that existed: a community. In this stage, cellular organization was minimal, and many important functions evolved separately and imperfectly in different cellular organisms. However, these organisms could evolve by exchanging genes, in a process called Horizontal Gene Transfer (HGT). This was the primary factor in very early evolution, not random mutation. According to Woese:

The world of primitive cells feels like a vast sea, or field, of cosmopolitan genes flowing into and out of the evolving cellular (and other) entities. Because of the high level of HGT [horizontal gene transfer], evolution at this stage would in essence be communal, not individual. The community of primitive evolving biological entities as a whole as well as the surrounding field of cosmopolitan genes participates in a collective reticulate [i.e., networked] evolution. (Ibid., p. 182)

It was only later that this loose community of cells increased its interactions to the point at which a phase transition took place, in which evolution became less communal and the vertical inheritance of relatively well-developed organisms became the main form of evolutionary descent. But horizontal gene transfer still continued after this transition, and continues to this day. (Ibid., pp. 182-84) It’s hard to see how these interactions resemble any kind of mechanism.

[Image: tree of life showing vertical and horizontal gene transfers. Source: Horizontal gene transfer – Wikipedia]

_____________________________

So let’s return to the question of “vitalism,” the old theory that there was something special responsible for life: a soul, spirit, force, or substance. The old theories of vitalism have been abandoned on the grounds that no one has been able to observe, identify, or measure a soul, spirit, etc. However, the dissatisfaction of many biologists with the “mechanist” outlook has led to a new conception of vitalism, one in which the essence of life is not in a mysterious substance or force but in the organization of matter and energy, and the processes that occur under this organization. (See Sebastian Normandin and Charles T. Wolfe, eds., Vitalism and the Scientific Image in Post-Enlightenment Life Science, 1800-2010, pp. 2n4, 69, 277, 294)

As Woese wrote, organisms are “resilient patterns . . . in an energy flow.” In a previous essay, I pointed to the work of the great physicist Werner Heisenberg, who noted that matter and energy are essentially interchangeable and that the universe itself began as a great burst of energy, much of which gradually evolved into different forms of matter over time. According to Heisenberg, “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .” (Physics and Philosophy, p. 63)

Now energy itself is not a personal being, and while energy can move things, it’s problematic to treat any moving matter as a kind of life. But is it not the case that once a particular configuration of energy/matter rises to a certain level, organized under a unified consciousness with a free will, then that configuration of energy/matter constitutes a spirit or soul? In this view, there is no vitalist “substance” that gives life to matter — it is simply a matter of energy/matter reaching a certain level of organization capable of (at least minimal) consciousness and free will.

In this view, when ancient peoples thought that breath was the spirit of life and blood was the sacred source of life, they were not that far off the mark. Oxygen is needed by (most) life forms to process the energy in food. Without the continual flow of oxygen from our environment into our body, we die. (Indeed, brain damage can occur after only a few minutes without oxygen.) And blood delivers the oxygen and nutrients to the cells that compose our body. Both breath and blood maintain the flow of energy that is essential to life. It’s all a matter of organized energy/matter, with billions of smaller actors and activities working together to form a unified conscious being.

The Metaphor of “Mechanism” in Science

The writings of science make frequent use of the metaphor of “mechanism.” The universe is conceived as a mechanism, life is a mechanism, and even human consciousness has been described as a type of mechanism. If a phenomenon is not an outcome of a mechanism, then it is random. Nearly everything science says about the universe and life falls into the two categories of mechanism and random chance.

The use of the mechanism metaphor is something most of us hardly ever notice. Science, allegedly, is all about literal truth and precise descriptions. Metaphors are for poetry and literature. But in fact mathematics and science use metaphors. Our understandings of quantity, space, and time are based on metaphors derived from our bodily experiences, as George Lakoff and Rafael Nunez have pointed out in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book, Making Truth: Metaphor in Science. Among these are the “billiard ball” and “plum pudding” models of the atom, as well as the “energy landscape” of protein folding. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly.

What I wish to do in this essay is closely examine the use of the mechanism metaphor in science. I will argue that this metaphor has been extremely useful in advancing our knowledge of the natural world, but its overuse as a descriptive and predictive model has led us down the wrong path to fully understanding reality — in particular, understanding the actual nature of life.

____________________________

Thousands of years ago, human beings attributed the actions of natural phenomena to spirits or gods. A particular river or spring or even tree could have its own spirit or minor god. Many humans also believed that they themselves possessed a spirit or soul which occupied the body, gave the body life and motion and intelligence, and then departed when the body died. According to the Bible, Genesis 2:7, when God created Adam from the dust of the ground, God “breathed into his nostrils the breath of life; and man became a living soul.” Knowing very little of biology and human anatomy, early humans were inclined to think that spirit/breath gave life to material bodies; and when human bodies no longer breathed, they were dead, so presumably the “spirit” went someplace else. The ancient Hebrews also saw a role for blood in giving life, which is why they regarded blood as sacred. Thus, the Hebrews placed many restrictions on the consumption and handling of blood when they slaughtered animals for sacrifice and food. These views about the spiritual aspects of breath and blood are also the historical basis of “vitalism,” the theory that life consists of more than material parts, and must somehow be based on a vital principle, spark, or force, in addition to matter. 

The problem with the vitalist outlook is that it did not appreciably advance our knowledge of nature and the human body.  The idea of a vital principle or force was too vague and could not be tested or measured or even observed. Of course, humans did not have microscopes thousands of years ago, so we could not see cells and bacteria, much less atoms.

By the 17th century, thinkers such as Thomas Hobbes and Rene Descartes proposed that the universe and even life forms were types of mechanisms, consisting of many parts that interacted in such a way as to result in predictable patterns. The universe was often analogized to a clock. (The first mechanical clock was developed around 1300 A.D., but water clocks, based on the regulated flow of water, have been in use for thousands of years.) The great French scientist Pierre-Simon Laplace was an enthusiast for the mechanist viewpoint and even argued that the universe could be regarded as completely determined from its beginnings:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes. (A Philosophical Essay on Probabilities, Chapter Two)

Laplace’s radical determinism was not embraced by all scientists, but it was a common view among many scientists. Later, as the science of biology developed, it was argued that the evolution of life was not as determined as the motion of the planets. Rather, random genetic mutations resulted in new life forms and “natural selection” determined that fit life forms flourished and reproduced, while unfit forms died out. In this view, physical mechanisms combined with random chance explained evolution.

The astounding advances in physics and biology in the past centuries certainly seem to justify the mechanism metaphor. Reality does seem to consist of various parts that interact in predictable cause-and-effect patterns. We can predict the motions of objects in space, and build technologies that send objects in the right direction and speed to the right target. We can also methodically trace illnesses to a dysfunction in one or more parts of the body, and this dysfunction can often be treated by medicine or surgery.

But have we been overusing the mechanism metaphor? Does reality consist of nothing but determined and predictable cause-and-effect patterns with an element of random chance mixed in?

I believe that we can shed some light on this subject by first examining what mechanisms are — literally — and then examining what resemblances and differences there are between mechanisms and the actual universe, between mechanisms and actual life.

____________________

 

Even in ancient times, human beings created mechanisms, from clocks to catapults to cranes to odometers. The Antikythera mechanism of ancient Greece, constructed around 100 B.C., was a sophisticated mechanism with over 30 gears that was able to predict astronomical motions and is considered to be one of the earliest computers.

[Image: a fragment of the Antikythera mechanism, discovered in an ocean shipwreck in 1901]

Over subsequent centuries, human civilization created steam engines, propeller-driven ships, automobiles, airplanes, digital watches, computers, robots, nuclear reactors, and spaceships.

So what do most or all of these mechanisms have in common?

  1. Regularity and Predictability. Mechanisms have to be reliable. They have to do exactly what you want every time. Clocks can’t run fast, then run slow; automobiles can’t unilaterally change direction or speed; nuclear reactors can’t overheat on a whim; computers have to give the right answer every time. 
  2. Precision. The parts that make up a mechanism must fit together and move together in precise ways, or breakdown, or even disaster, will result. Engineering tolerances are typically measured in fractions of a millimeter.
  3. Stability and Durability. Mechanisms are often made of metal, and for good reason. Metal can endure extreme forces and temperatures, and, if properly maintained, can last for many decades. Metal can slightly expand and contract depending on temperature, and metals can have some flexibility when needed, but metallic constructions are mostly stable in shape and size. 
  4. Unfree/Determined. Mechanisms are built by humans for human purposes. When you manage the controls of a mechanism correctly, the results are predictable. If you get into your car and decide to drive north, you will drive north. The car will not dispute you or override your commands, unless it is programmed to do so, in which case it is simply following a different set of instructions. The car has no will of its own. Human beings would not build mechanisms if such mechanisms acted according to their own wills. The idea of a self-willing mechanism is common in science fiction, but not in science.
  5. They do not grow. Mechanisms do not become larger over time or change their basic structure like living organisms. This would be contrary to the principle of durability/stability. Mechanisms are made for a purpose, and if there is a new purpose, a new mechanism will be made.
  6. They do not reproduce. Mechanisms do not have the power of reproduction. If you put a mechanism into a resource-rich environment, it will not consume energy and materials and give birth to new mechanisms. Only life has this power. (A partial exception can be made in the case of  computer “viruses,” which are lines of code programmed to duplicate themselves, but the “viruses” are not autonomous — they do the bidding of the programmer.)
  7. Random events lead to the degradation of mechanisms, not improvement. According to neo-Darwinism, random mutations in the genes of organisms are responsible for evolution; most mutations are harmful, but some lead to improvement, producing new and more complex organisms, ultimately culminating in human beings. So what kind of random mutations (changes) lead to improved mechanisms? None, really. Mechanisms change over time with random events, but these events lead to degradation, not improvement. Rust sets in, parts break, electrical connections fail, lubricating fluids leak. If you leave a set of carefully preserved World War One biplanes out in a field, without human intervention, they will not eventually evolve into jet planes and rocket ships. They will just break down. Likewise, electric toasters will not evolve into supercomputers, no matter how many millions of years you wait. Of course, organisms also degrade and die, but they have the power of reproduction, which continues the population and creates opportunities for improvement.

There is one hypothetical mechanism that, if constructed, could mimic actual organisms: a self-replicating machine. Such a machine could conceivably contain plans within itself to gather materials and energy from its environment and use these materials and energy to construct copies of itself, growing exponentially in numbers as more and more machines reproduce themselves. Such machines could even be programmed to “mutate,” creating variations in their descendants. However, no such mechanism has yet been produced. Meanwhile, primitive single-celled life forms on earth have been successfully reproducing for four billion years.

Now, let’s compare mechanisms to life forms. What are the characteristics of life?

  1. Adaptability/Flexibility. The story of life on earth is a story of adaptability and flexibility. The earliest life forms, single cells, apparently arose in hydrothermal vents deep in the ocean. Later, some of these early forms evolved into multi-cellular creatures, which spread throughout the oceans. Roughly 3.5 billion years after life first appeared, fish emerged, and then, much later, the first land creatures. Over time, life adapted to different environments: sea, land, rivers, caves, air; and also to different climates, from the steamiest jungles to the frozen poles.
  2. Creativity/Diversification. Life is not only adaptive, it is highly creative and branches into the most diverse forms over time. Today, there are millions of species. Even in the deepest parts of the ocean, life forms thrive under pressures that would crush most other creatures. There are bacteria that can live in water at or near the boiling point. The tardigrade can survive the cold, hostile vacuum of space. The bacterium Deinococcus radiodurans is able to survive extreme levels of radiation by means of one of the most efficient DNA repair capabilities ever seen. Now it’s true that among actual mechanisms there is also a great variety; but these mechanisms are not self-created: they are created by humans and retain their forms unless specifically modified by humans.
  3. Drives toward cooperation / symbiosis. Traditional Darwinist views of evolution see life as competition and “survival of the fittest.” However, more recent theorists of evolution point to the strong role of cooperation in the emergence and survival of advanced life forms. The biologist Lynn Margulis argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, creating a more advanced life form. This theory was regarded with much skepticism at the time it was proposed, but over time it became widely accepted. Today, by cell count, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes! By contrast, the parts of a mechanism don’t naturally come together to form the mechanism; they are forced together by their manufacturer.
  4. Growth. Life is characterized by growth. All life forms begin with either a single cell, or the merger of two cells, after which a process of repeated division begins. In multicellular organisms, the initial cell eventually becomes an embryo; and when that embryo is born, becoming an independent life form, it continues to grow. In some species, that life form develops into an animal that can weigh hundreds or even thousands of pounds. This, from a microscopic cell! No existing mechanism is capable of that kind of growth.
  5. Reproduction. Mechanisms eventually disintegrate, and life forms die. But life forms have the capability of reproducing and making copies of themselves, carrying on the line. In an environment with adequate natural resources, the number of life forms can grow exponentially. Mechanisms have not mastered that trick.
  6. Free will/choice. Mechanisms are either under direct human control, are programmed to do certain things, or perform in a regular pattern, as a clock does. Life forms, in their natural settings, are free and have their own purposes. There are some regular patterns — sleep cycles, mating seasons, winter migration. But the day-to-day movements and activities of life forms are largely unpredictable. They make spur-of-the-moment decisions on where to search for food, where to find shelter, whether to fight or flee from predators, and which mate is most acceptable. In fact, the issue of mate choice is one of the most intriguing illustrations of free will in life forms — there is evidence that members of some species may select mates for beauty over actual fitness, and human egg cells even play a role in selecting which sperm cells will be allowed to penetrate them.
  7. Able to gather energy from its environment. Mechanisms require energy to work, and they acquire such energy from wound springs or weights (in clocks), electrical outlets, batteries, or fuel. These sources of energy are provided by humans in one way or another. But life forms are forced to acquire energy on their own, and even the most primitive life forms mastered this feat billions of years ago. Plants get their energy from the sun, and animals get their energy from plants or other animals. It’s true that some mechanisms, such as space probes, can operate on their own for many years while drawing energy from solar panels. But these panels were invented and produced by humans, not by mechanisms.
  8. Self-organizing. Mechanisms are built, but life forms are self-organizing. Small components join other small components, forming a larger organization; this larger organization gathers together more components. There is a gradual growth and differentiation of functions — digestion, breathing, brain and nervous system, mobility, immune function. Now this process is very, very slow: evolution takes place over hundreds of millions of years. But mechanisms are not capable of self-organization. 
  9. Capacity for healing and self-repair. When mechanisms are broken, or not working at full potential, a human being intervenes to fix the mechanism. When organisms are injured or infected, they can self-repair by initiating multiple processes, either simultaneously or in stages: immune cells fight invaders; blood cells clot in open wounds to stop bleeding; dead tissues and cells are removed by other cells; and growth hormones are released to begin the process of building new tissue. As healing nears completion, cells originally sent to repair the wound are removed or modified. Now self-repair is not always adequate, and organisms die all the time from injury or infection. But they would die much sooner, and probably a species would not persist at all, without the means of self-repair. Even the existing medications and surgery that modern science has developed largely work with and supplement the body’s healing capacities — after all, surgery would be unlikely to work in most cases without the body’s means of self-repair after the surgeon completes cutting and sewing.

______________________

 

The mechanism metaphor served a very useful purpose in the history of science, by spurring humanity to uncover the cause-and-effect patterns responsible for the motions of stars and planets and the biological functions of life. We can now send spacecraft to planets; we can create new chemicals to improve our lives; we now know that illness is the result of a breakdown in the relationship between the parts of a living organism; and we are getting better and better at figuring out which human parts need medication or repair, so that lifespans and general health can be extended.

But if we are seeking the broadest possible understanding of what life is, and not just the biological functions of life, we must abandon the mechanism metaphor as inadequate and even deceptive. I believe the mechanism metaphor misses several major characteristics of life:

  1. Change. Whether it is growth, reproduction, adaptation, diversification, or self-repair, life is characterized by change, by plasticity, flexibility, and malleability. 
  2. Self-Driven Progress. There is clearly an overall improvement in life forms over time. Changes in species may take place over millions or billions of years, but even so, the differences between a single-celled organism and contemporary multicellular creatures are astonishingly large. It is not just a question of “complexity,” but of capability. Mammals, reptiles, and birds have senses, mobility, and intelligence that single-celled creatures do not have.
  3. Autonomy and freedom. Although some scientists are inclined to think of living creatures, including humans, as “gene machines,” life forms can’t be easily analogized to pre-programmed machines. Certainly, life forms have goals that they pursue — but the pursuit of these goals in an often hostile environment requires numerous spur-of-the-moment decisions that do not lead to the predictable outcomes we expect of mechanisms.

Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, argues in Lila that the fundamental nature of life is its ability to move away from mechanistic patterns, and science has overlooked this fact because scientists consider it their job to look for mechanisms:

Mechanisms are the enemy of life. The more static and unyielding the mechanisms are, the more life works to evade them or overcome them. The law of gravity, for example, is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the simple protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. . . .  This would explain why patterns of life [in evolution] do not change solely in accord with causative ‘mechanisms’ or ‘programs’ or blind operations of physical laws. They do not just change valuelessly. They change in ways that evade, override and circumvent these laws. The patterns of life are constantly evolving in response to something ‘better’ than that which these laws have to offer. (Lila, 1991 hardcover edition, p. 143)

But if the “mechanism” metaphor is inadequate, what are some alternative conceptualizations and metaphors that can retain the previous advances of science while deepening our understanding and helping us make new discoveries? I will discuss this issue in the next post.

Next: Beyond the “Mechanism” Metaphor in Biology

 

Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . .  by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic. ” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:

 

1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts“).

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (Percept and Concept – The Abuse of Concepts)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts gave humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.

 

2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above-normal. They did not lose their intellectual ability, but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure persons’ emotional intelligence.

 

3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. If we invent a new word that becomes widely adopted, or come up with an idea that is both completely original and worthy, that is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, with backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, by the most intelligent people in that culture, has overcome many backward and barbarous practices, and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and their contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture they have lived in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes.  But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)

 

4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head, you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to succeed fully at this task simply by applying a set of rules. The people who were best at diagnosing scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience and cannot be fully learned from a text. Many times, when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
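
To make the “strengthen or weaken the connections” idea concrete, here is a minimal single-layer sketch in Python. It is a toy illustration only, not the radiology systems the article describes: the data, the feature values, and the network size are invented, and this simple variant adjusts its connections only when its answer is wrong.

```python
# Toy sketch of learning by adjusting "connections" (weights) from feedback.
# Purely illustrative: real medical-imaging networks have many layers and
# millions of weights; the "scans" below are invented two-number patterns.

def train(examples, epochs=50, learning_rate=0.1):
    """examples: list of (features, label) pairs, where label is 0 or 1."""
    num_features = len(examples[0][0])
    weights = [0.0] * num_features   # the "connections"
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction   # 0 if correct, +1 or -1 if wrong
            # A wrong answer nudges the responsible connections up or down;
            # a correct answer leaves them unchanged.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Hypothetical patterns: label 1 when the second feature dominates the first.
scans = [([0.2, 0.9], 1), ([0.8, 0.1], 0), ([0.3, 0.7], 1), ([0.9, 0.2], 0)]
print(train(scans))
```

The trained weights are just numbers; nothing in them can be read back as an explicit rule, which is the sense in which the system remains a “black box.”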

 

5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by various persons at one point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.

 

6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics and chemistry and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________

 

This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle, but which has been used as an example of Aristotle’s logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)
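
For readers who want to see that deductive step written out with complete explicitness, here is a minimal sketch in the Lean proof language; the identifiers Person, Man, Mortal, and socrates are merely illustrative labels of my own, not anything drawn from Aristotle.

```lean
-- The syllogism above, spelled out as a machine-checked deduction.
variable (Person : Type) (socrates : Person)
variable (Man Mortal : Person → Prop)

example (h1 : ∀ x, Man x → Mortal x)   -- All men are mortal.
        (h2 : Man socrates)            -- Socrates is a man.
        : Mortal socrates :=           -- Therefore, Socrates is mortal.
  h1 socrates h2
```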

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

Is Truth a Type of Good?

“[T]ruth is one species of good, and not, as is usually supposed, a category distinct from good, and co-ordinate with it. The true is the name of whatever proves itself to be good in the way of belief. . . .” – William James, “What Pragmatism Means”

“Truth is a static intellectual pattern within a larger entity called Quality.” – Robert Pirsig, Lila

 

Does it make sense to think of truth as a type of good? The initial reaction of most people to this claim is negative, sometimes strongly so. Surely what we like and what is true are two different things. The reigning conception of truth is known as the “correspondence theory of truth,” which argues simply that in order for a statement to be true it must correspond to reality. In this view, the words or concepts or claims we state must match real things or events, and match them exactly, whether those things are good or not.

The American philosopher William James (1842-1910) acknowledged that our ideas must agree with reality in order to be true. But where he parted company with most of the rest of the world was in what it meant for an idea to “agree.” In most cases, he argued, ideas cannot directly copy reality. According to James, “of many realities our ideas can only be symbols and not copies. . . . Any idea that helps us to deal, whether practically or intellectually, with either the reality or its belongings, that doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting, will agree sufficiently to meet the requirement.” He also argued that “True ideas are those we can assimilate, validate, corroborate, and verify.” (“Pragmatism’s Conception of Truth“) Many years later, Robert Pirsig argued in Zen and the Art of Motorcycle Maintenance and Lila that the truths of human knowledge, including science, were developed out of an intuitive sense of good or “quality.”

But what does this mean in practice? Many truths are unpleasant, and reality often does not match our desires. Surely truth should correspond to reality, not what is good.

One way of understanding what James and Pirsig meant is to examine the origins and development of language and mathematics. We use written language and mathematics as tools to make statements about reality, but the tools themselves do not merely “copy” or even strictly correspond to reality. In fact, these tools should be understood as symbolic systems for communication and understanding. In the earliest stages of human civilization, these symbolic systems did try to copy or correspond to reality; but the strict limitations of “corresponding” to reality were in fact a hindrance to the truth, requiring new, more creative symbols that allowed knowledge to advance.

 

_______________________________

 

The first written languages consisted of pictograms, that is, drawn depictions of actual things — human beings, stars, cats, fish, houses. Pictograms had one big advantage: by clearly depicting the actual appearance of things, everyone could quickly understand them. They were the closest thing to a universal language; anyone from any culture could understand pictograms with little instruction.

However, there were some pretty big disadvantages to the use of pictograms as a written language. Many of the things we all see in everyday life can be clearly communicated through drawings. But there are a lot of ideas, actions, abstract concepts, and details that are not so easily communicated through drawings. How does one depict activities such as running, hunting, fighting, and falling in love, while making it clear that one is communicating an activity and not just a person? How does one depict a tribe, kingdom, battle, or forest, without becoming bogged down in drawing pictograms of all the persons and objects involved? How does one depict attributes and distinguish between specific types of people and specific types of objects? How does one depict feelings, emotions, ideas, and categories? Go through a dictionary at random sometime and see how many words can be depicted in a clear pictogram. There are not many. There is also the problem of differences in artistic ability and the necessity of maintaining standards. Everyone may have a different idea of what a bird looks like and different abilities in drawing a bird.

These limitations led to an interesting development in written language: over hundreds or thousands of years, pictograms became increasingly abstract, to the point at which their form did not copy or correspond to what they represented at all. This development took place across civilizations, as seen in this graphic, in which the top pictograms represent the earliest forms and the bottom ones the later forms:

(Source: Wikipedia, https://en.wikipedia.org/wiki/History_of_writing)

Eventually, pictograms were abandoned by most civilizations altogether in favor of alphabets. By using combinations of letters to represent objects and ideas, it became easier for people to learn how to read and write. Instead of having to memorize tens of thousands of pictograms, people simply needed to learn new combinations of letters/sounds. No artistic ability was required.

One could argue that this development in writing systems does not address the central point of the correspondence theory of truth, that a true statement must correspond to reality. In this theory, it is perfectly OK for an abstract symbol to represent something. If someone writes “I caught a fish,” it does not matter if the person draws a fish or uses abstract symbols for a fish, as long as this person, in reality, actually did catch a fish. From the pragmatic point of view, however, the evolution of human symbolic systems toward abstraction is a good illustration of pragmatism’s main point: by making our symbolic systems better, human civilizations were able to communicate more, understand more, educate more, and acquire more knowledge. Pictograms fell short in helping us “deal with reality,” and that’s why written language had to advance above and beyond pictograms.

 

Let us turn to mathematics. The earliest humans were aware of quantities, but tended to depict quantities in a direct and literal manner. For small quantities, such as two, the ancient Egyptians would simply draw two pictograms of the object. Nothing could correspond to reality better than that. However, for larger quantities, it was hard, tedious work to draw the same pictogram over and over. So early humans used tally marks or hash marks to indicate quantities, with “four” represented as four distinct marks:  | | | | and then perhaps a symbol or pictogram of the object. Again, these earliest depictions of numbers were so simple and direct, the correspondence to reality so obvious, that they were easily understood by people from many different cultures.

In retrospect, tally marks appear to be very primitive and hardly a basis for a mathematical system. However, I argue that tally marks were actually a revolutionary advance in how human beings understood quantities — because for the first time, quantity became an abstraction disconnected from particular objects. One did not have to make distinctions between three cats, three kings, or three bushels of grain; the quantity “three” could be understood on its own, without reference to what it was representing. Rather than drawing three cats, three kings, or three bushels of grain, one could use | | |  to represent any group of three objects.

The problem with tally marks, of course, was that this system could not easily handle large quantities or permit complex calculations. So, numerals were invented. The ancient Egyptian numeral system used tally marks for numbers below ten, but then used other symbols for larger quantities: ten, hundred, thousand, and so forth.

The ancient Roman numeral system also evolved out of tally marks, with | | | or III representing “three,” but with different symbols for five (V), ten (X), fifty (L), hundred (C), five hundred (D), and thousand (M). Numbers were depicted by writing the largest numerical symbols on the left and the smallest to the right, adding the symbols together to get the quantity (example: 1350 = MCCCL); a smaller numerical symbol to the left of a larger numerical symbol required subtraction (example: IX = 9). As with the Egyptian system, Roman numerals were able to cope with large numbers, but rather than the more literal depiction offered by tally marks, the symbols were a more creative interpretation of quantity, with implicit calculations required for proper interpretation of the number.
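
As a concrete illustration of those two rules, here is a minimal Python sketch that reads a Roman numeral under the additive and subtractive conventions just described; the function name roman_to_int and the lookup table are my own illustrative choices, and the test values are simply the numbers mentioned above.

```python
# Minimal reader for Roman numerals under the additive/subtractive rules.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, symbol in enumerate(numeral):
        value = ROMAN_VALUES[symbol]
        # A smaller symbol written before a larger one is subtracted (IX = 9);
        # otherwise symbols are simply added left to right (MCCCL = 1350).
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

assert roman_to_int("MCCCL") == 1350
assert roman_to_int("IX") == 9
```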

The use of numerals by ancient civilizations represented a further increase in the abstraction of quantities. With numerals, one could make calculations of almost any quantity of any objects, even imaginary objects or no objects. Teachers instructed children how to use numerals and how to make calculations, usually without any reference to real-world objects. A minority of intellectuals studied numbers and calculations for many years, developing general theorems about the relationships between quantities. And before long, the power and benefits of mathematics grew to the point that mathematicians became convinced that mathematics was the ultimate reality of the universe, and not the actual objects we once attached to numbers. (On the theory of “mathematical Platonism,” see this post.)

For thousands of years, Roman numerals continued to be used. Rome was able to build and administer a great empire, while using these numerals for accounting, commerce, and engineering. In fact, the Romans were famous for their accomplishments in engineering. It was not until the 14th century that Europe began to discover the virtues of the Hindu-Arabic numeral system. And although it took centuries more, today the Hindu-Arabic system is the most widely-used system of numerals in the world.

Why is this?

The Hindu-Arabic system is noted for two major accomplishments: its positional decimal system and the number zero. The “positional decimal system” simply refers to a base 10 system in which the value of a digit is based upon its position. A single digit may be multiplied by ten or one hundred or one thousand, depending on its position in the number. For example, the number 832 is: 8×100 + 3×10 + 2. We generally don’t notice this, because we spent years in school learning this system, and it comes to us automatically that the first digit “8” in 832 means 8 × 100. Roman numerals never worked this way. The Romans grouped quantities in symbols representing ones, fives, tens, fifties, one hundreds, etc. and added the symbols together. So the Roman version of 832 is DCCCXXXII (500 + 100 + 100 + 100 + 10 + 10 + 10 + 1 + 1).

Because the Roman numeral system is additive, adding Roman numbers is easy — you just combine all the symbols. But multiplication is harder, and division is even harder, because it’s not so easy to take apart the different symbols. In fact, for many calculations, the Romans used an abacus, rather than trying to write everything down. The Hindu-Arabic system makes multiplication and division easy, because every digit, depending on its placement, is a multiple of 1, 10, 100, 1000, etc.
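
Here is an equally small sketch of the positional idea itself, continuing the 832 example from above; the helper name place_values is hypothetical, chosen only for illustration.

```python
# Each digit's contribution depends on where it sits: 832 = 8*100 + 3*10 + 2.
def place_values(n: int) -> list:
    digits = [int(d) for d in str(n)]
    powers = range(len(digits) - 1, -1, -1)
    return [d * 10 ** p for d, p in zip(digits, powers)]

print(place_values(832))    # [800, 30, 2]
# Shifting a digit one place to the left multiplies its contribution by ten,
# which is what makes written multiplication and long division routine.
print(place_values(8320))   # [8000, 300, 20, 0]
```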

The invention of the positional decimal system took thousands of years, not because ancient humans were stupid, but because symbolizing quantities and their relationships in a way that is useful is actually hard work and requires creative interpretation. You just don’t look at nature and say, “Ah, there’s the number 12, from the positional decimal system!”

In fact, even many of the simplest numbers took thousands of years to become accepted. The number zero was not introduced to Europe until the 11th century and it took several more centuries for zero to become widely used. Negative numbers did not appear in the west until the 15th century, and even then, they were controversial among the best mathematicians until the 18th century.

The shortcomings of seeing mathematical truths as a simple literal copying of reality become even clearer when one examines the origins and development of weights and measures. Here too, early human beings started out by picking out real objects as standards of measurement, only to find them unsuitable in the long run. One of the most well-known units of measurement in ancient times was the cubit, defined as the length of a man’s forearm from elbow to the tip of the middle finger. The foot was defined as the length of a man’s foot. The inch was the width of a man’s thumb. A basic unit of weight was the grain, that is, a single grain of barley or wheat. All of these measures corresponded to something real, but the problem, of course, was that there was a wide variation in people’s body parts, and grains could also vary in weight. What was needed was standardization; and it was not too long before governing authorities began to establish common standards. In many places throughout the world, authorities agreed that a single definition of each unit, based on a single object kept in storage, would be the standard throughout the land. The objects chosen were a matter of social convention, based upon convenience and usefulness. Nature or reality did not simply provide useful standards of measurement; there was too much variation even among the same types of objects provided by nature.

 

At this point, advocates of the correspondence theory of truth may argue, “Yes, human beings can use a variety of symbolic systems, and some are better than others. But the point is that symbolic systems should all represent the same reality. No matter what mathematical system you use, two plus two should still equal four.”

In response, I would argue that for very simple questions (2+2=4), the type of symbolic system you use will not make a big difference — you can use tally marks, Roman numerals, or Hindu-Arabic numerals. But the type of symbolic system you use will definitely make a difference in how many truths you can uncover and particularly how many complicated truths you can grasp. Without good symbolic systems, many truths will remain forever hidden from us.  As it was, the Roman numeral system was probably responsible for the lack of mathematical accomplishments of the Romans, even if their engineering was impressive for the time. And in any case, the pragmatic theory of truth already acknowledges that truth must agree with reality — it just cannot be a copy of reality. In the words of William James, an ideal symbolic system “helps us to deal, whether practically or intellectually, with either the reality or its belongings . . . doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting.”(“Pragmatism’s Conception of Truth“)

What is “Transcendence”?

You may have noticed a number of writings on religious topics that make reference to “transcendence” or “the transcendent.” However, the word “transcendence” is usually not very well defined, if it is defined at all. The Catechism of the Catholic Church makes several references to transcendence, but it’s not completely clear what transcendence means other than the infinite greatness of God, and the fact that God is “the inexpressible, the incomprehensible, the invisible, the ungraspable.” For those who value reason and precise arguments, this vagueness is unsatisfying. Astonishingly, the fifteen-volume Catholic Encyclopedia (1907-1914) did not even have an entry on “transcendence,” though it did have an entry on “transcendentalism,” a largely secular philosophy with a variety of schools and meanings. (The New Catholic Encyclopedia in 1967 finally did have an entry on “transcendence.”)

The Oxford English Dictionary defines “transcendence” as “the action or fact of transcending, surmounting, or rising above . . . ; excelling, surpassing; also the condition or quality of being transcendent, surpassing eminence or excellence. . . .” The reference to “excellence” is probably key to understanding what “transcendence” is. In my previous essay on ancient Greek religion, I pointed out that areté, the Greek word for “excellence,” was a central idea of Greek culture, and that one cannot fully appreciate the ancient Greek pagan religion without recognizing the Greeks’ devotion to excellence. The Greeks depicted their gods as human, but with perfect physical forms. And while the behavior of the Greek gods was often dubious from a moral standpoint, the Greek gods were still regarded as the givers of wisdom, order, justice, love, and all the institutions of human civilization.

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Theologians refer to transcendence as one of the two natures of God, the other being “immanence.” Transcendence refers to the higher nature of God and immanence refers to God as He currently works in reality, i.e., the cosmic order. The division between those who believe in a personal God and those who believe in an impersonal God reflects the division between the transcendent and immanent view of God. It is no surprise that most scientists who believe in God tend more to the view of an impersonal God, because their whole life is dedicated to examining the reality of the cosmic order, which seems to operate according to a set of rules rather than personal supervision.

Of course, atheists don’t even believe in an impersonal God. One famous atheist, Sigmund Freud, argued that religion was an illusion, a simple exercise in “wish fulfillment.” According to Freud, human beings desired love, immortality, and an end to suffering and pain, so they gravitated to religion as a solution to the inevitable problems and limitations of mortal life. Marxists have a similar view of religion, seeing promises of an afterlife as a barrier to improving actual human life.

Another view was taken by the American philosopher George Santayana, whose book, Reason in Religion, is one of the very finest books ever written on the subject of religion. According to Santayana, religion was an imaginative and poetic interpretation of life; religion supplied ideal ends to which human beings could orient their lives. Religion failed only when it attributed literal truth to these imaginative ideal ends. Thus religions should be judged, according to Santayana, according to whether they were good or bad, not whether they were true or false.

This criterion for judging religion would appear to be irrational, both to rationalists and to those who cling to faith. People tend to equate worship of God with belief in God, and often see literalists and fundamentalists as the most devoted of all. But I would argue that worship is the act of submission to ideal ends, which hold value precisely because they are higher than actually existing things, and therefore cannot pass traditional tests of truth, which call for a correspondence to reality.

In essence, worship is submission to a transcendent Good. We see good in our lives all the time, but we know that the particular goods we experience are partial and perishable. Freud is right that we wish for goods that cannot be acquired completely in our lives and that we use our imaginations to project perfect and eternal goods, i.e. God and heaven. But isn’t it precisely these ideal ends that are sacred, not the flawed, perishable things that we see all around us? In the words of Santayana,

[I]n close association with superstition and fable we find piety and spirituality entering the world. Rational religion has these two phases: piety, or loyalty to necessary conditions, and spirituality, or devotion to ideal ends. These simple sanctities make the core of all the others. Piety drinks at the deep, elemental sources of power and order: it studies nature, honours the past, appropriates and continues its mission. Spirituality uses the strength thus acquired, remodeling all it receives, and looking to the future and the ideal. (Reason in Religion, Chapter XV)

People misunderstand ancient Greek religion when they think it is merely a set of stories about invisible personalities who fly around controlling nature and intervening in human affairs. Many Greek myths were understood to be poetic creations, not history; there were often multiple variations of each myth, and people felt free to modify the stories over time, create new gods and goddesses, and change the functions/responsibilities of each god. Rational consistency was not expected, and depictions of the appearance of any god or goddess in statues or painting could vary widely. For the Greeks, the gods were not just personalities, but transcendent forms of the Good. This is why Greek religion also worshipped idealized ends and virtues such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks represented these idealized ends and virtues as persons (usually females) in statues, built temples for them, and composed worshipful hymns to them. In fact, the tendency of the Greeks to depict any desired end or virtue as a person was so prevalent, it is sometimes difficult for historians to tell if a particular statue or temple was meant for an actual goddess/god or was a personified symbol. For the ancient Greeks, the distinction may not have been that important, for they tended to think in highly poetic and metaphorical terms.

This may be fine as an interpretation of religion, you may say, but does it make sense to conceive of imaginative transcendent forms as persons or spirits who can actually bring about the goods and virtues that we seek? Is there any reason to think that prayer to Athena will make us wise, that singing a hymn to Zeus will help us win a war, or that a sacrifice at the temples of “Peace” or “Health” will bring us peace or health? If these gods are not powerful persons or spirits that can hear our prayers or observe our sacrifices, but merely poetic representations or symbols, then what good are they and what good is worship?

My view is this: worship and prayer do not affect natural causation. Storms, earthquakes, disease, and all the other calamities that have afflicted humankind from the beginning are not affected by prayer. Addressing these calamities requires research into natural causation, planning, human intervention, and technology. What worship and prayer can do, if they are directed at the proper ends, is help us transcend ourselves, make ourselves better people, and thereby make our societies better.

In a previous essay, I reviewed the works of various physicists, who concluded that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. I think this dynamic view of reality is what we need in order to understand the relationship between the transcendent and the actual. We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that surpasses who we already are. The pantheism of Spinoza and Einstein is more rational than traditional myths that attributed natural events to a personal God who created the world in six days and subsequently punished evil by causing natural disasters. But pantheism is ultimately a poor basis for religion. What would be the point of worshipping the law of gravity or electromagnetism or the elements in the periodic table? These foundational parts of the universe are impressive, but I would argue that aspiring to something higher is fundamental not only to human nature but to the universe itself. The universe, after all, began simply with a concentrated point of energy; then space expanded and a few elements such as hydrogen and helium formed; only after hundreds of millions of years did the first stars, planets, and other elements necessary for life begin to emerge.

Worshipping the transcendent orients the self to a higher good, out of the immediate here-and-now. And done properly, worship results in worthy accomplishments that improve life. We tend to think of human civilization as being based on the rational mastery of a body of knowledge. But all knowledge began with an imagined transcendent good. The very first lawgivers had no body of laws to study; the first ethicists had no texts on morals to consult; the first architects had no previous designs to emulate; the first mathematicians had no symbols to calculate with; the first musicians had no composers to study. This imagined good inspired experimentation with primitive forms, and then improvement on those initial efforts. Only much later, after many centuries, did the fields of law, ethics, architecture, mathematics, and music become bodies of knowledge requiring years of study. So we attribute these accomplishments to reason, forgetting the imaginative leaps that first spurred these fields.

 

Are Human Beings Just Atoms?

In a previous essay on materialism, I discussed the bizarre nature of phenomena on the subatomic level, in which particles have no definite position in space until they are observed. Referencing the works of several physicists and philosophers, I put forth the view that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. In this view, when one breaks down reality into smaller and smaller parts, one does not reach the fundamental units of matter; rather, one is gradually unbundling properties and qualities until the smallest objects no longer even have a definite position in space!

Why is this important? One reason is that the enormous prestige and accomplishments of science have sometimes led us down the wrong path in properly describing and interpreting reality. Science excels at advancing our knowledge of how things work, by breaking down wholes into component parts and manipulating those parts into better arrangements that benefit humanity. This is how we got modern medicine, computers, air conditioning, automobiles, and space travel. However, science sometimes falls short in this descriptive and interpretive task, precisely because it focuses more on the parts than on the wholes.

This defect in science becomes particularly glaring when certain scientists attempt to describe what human beings are like. All too often there is a tendency to reduce humans to their component parts, whether these parts are chemical elements (atoms), chemical compounds (molecules), or the much larger molecules known as genes. However, while these component parts make up human beings, there are properties and qualities in human beings that cannot be adequately described in terms of these parts.

Marcelo Gleiser, a physicist at Dartmouth College, argues that “life is the property of a complex network of biochemical reactions . . . a kind of hungry chemistry that is able to duplicate itself.” Biologist Richard Dawkins claims that humans are “just gene machines,” and “living organisms and their bodies are best seen as machines programmed by the genes to propagate those very same genes,” though he qualifies his statement by noting that “there is a very great deal of complication, and indeed beauty in being a gene machine.” Philosopher Daniel Dennett claims that human beings are “moist robots” and the human mind is a collection of computer-like information processes which happen to take place in carbon-based rather than silicon-based hardware.

Now it is true that human beings are composed of atoms, which combine into molecules and chemical compounds such as genes. The issue, however, is whether describing the parts that compose a human being is the same as describing the whole human being. Yes, human beings are composed of atoms of oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. But these atoms can be found in many, many places throughout the universe, in varying quantities and combinations, and they do not have human qualities unless and until they are organized in just the right way. Likewise, genes are ubiquitous in life forms ranging from mammals to lizards to plants to bacteria. Even viruses have genes, though most scientists argue that viruses are not true life forms because they need a host to reproduce. Nevertheless, while human beings share a very few properties and qualities with bacteria and viruses, humans clearly have many properties and qualities that the lower life forms do not.

In fact, the very ability to recognize the difference between life and death can be lost through excessive focus on atoms and molecules. Consider the following: an emergency room doctor treats a patient suffering from a heart attack. Despite the physician’s best efforts, despite all of the doctor’s training and knowledge, the patient dies on the table. So what is the difference between the patient who has died and the patient as he was several hours ago? The quantity and types of atoms composing the body are approximately the same as when the patient was alive. So what has changed? Obviously, the properties and qualities expressed by the organization of the atoms in the human being have changed. The heart no longer supplies blood to the rest of the body, the lungs no longer supply oxygen, the brain no longer has electrical activity, the human being no longer has the ability to run or walk or jump or talk or think or love. Atoms have to be organized in an extremely precise manner in order for these properties and qualities to emerge, and this organization has been lost. So if we are really going to accurately describe what a human being is, we have to refer not just to the atoms, but to the overall organization or form.

The issue of form is what separates the ancient Greek philosophers Democritus and Plato. Both philosophers believed that the universe and everything in it were composed of atoms; but Democritus thought that nothing existed but atoms and the void (space), whereas Plato believed that atoms were arranged by a creator, who, being essentially good, used ideal forms as a blueprint. Contrary to the views of Judaism, Christianity, and Islam, however, Plato believed that the creator was not omnipotent, and was forced to work with imperfect matter to do the best job possible, which is why most created objects and life forms were imperfect and fell short of the ideal forms.

Democritus would no doubt dismiss Plato’s ideal forms as being unreal — after all, forms are not something solid, so how can anything that is not solid, not made of material, exist at all? But as I’ve pointed out, the atoms that compose the human body are found everywhere, whereas actual, living human beings have these same atoms organized in a precise, particular form. In other words, in order to understand anything, it is not enough to break it down into parts and study the parts; one has to look at the whole. The properties and qualities of a living human being, as a whole, definitely do exist, or we would not know how to distinguish a living human being from a dead human being or any other existing thing composed of the same atoms.

The debate between Democritus and Plato points to a difference in ways of knowing that persist to this day: analytic knowledge and holistic knowledge. Analytic knowledge is pursued by science and reason; holistic knowledge is pursued by religion, art, and the humanities. The prestige of science and its technological accomplishments has elevated analytic understanding above all other forms of knowledge, but we remain lost without holistic understanding.

What precisely is “analytic knowledge”? The word “analyze” means “to study or determine the nature and relationship of the parts (of something) by analysis.” Synonyms for “analyze” include “break down,” “cut,” “deconstruct,” and “dissect.” In fact, the word “analysis” is derived, via New Latin, from the Greek word analyein, meaning “to break up.” Analysis is an extremely valuable tool and is responsible for human progress in all sorts of areas. But the knowledge derived from analysis is primarily a description and guide to how things work. It reduces knowledge of the whole to knowledge of the parts, which is fine if you want to take something apart and put it back together. But the knowledge of how things work is not the same as the knowledge of what things are as a whole, what qualities and properties they have, and the value of those qualities and properties. This latter knowledge is holistic knowledge.

The word “holism,” based on the ancient Greek word for “whole” (holos), was coined in the early twentieth century in order to promote the view that all systems, living or not, should be viewed as wholes and not just as a collection of parts or the sum of parts. It’s no accident that the words “whole,” “heal,” “healthy,” and “holy” are linguistically related. The problems of sickness, malnutrition, and injury were well-known to the ancients, and it was natural for them to see these problems as a disturbance to the whole human being, rendering a person incomplete and missing certain vital functions. Wholeness was an ideal end, which made wholeness sacred (holy) as well. (For an extended discussion of analytic/reductionist knowledge vs. holistic knowledge, see this post.)

Holistic knowledge is not just about ideal physical health. It’s about ideal forms in all aspects, including the qualities we associate with human beings we admire: wisdom, strength, beauty, courage, love, kindness. As mistaken as religions have been in understanding natural causation, it is the devotion to ideal forms that is really the essence of religion. The ancient Greeks worshipped excellence, as embodied in their gods; Confucians were devoted to family ties and duties; the Jews submitted themselves to the laws of the one God; Christians devoted themselves to the love of God, embodied in Christ.

Holistic knowledge provides no guidance as to how to conduct surgery or build a computer or launch a rocket; but it does provide insight into the ethics of medicine, the desirability or hazards of certain types of technology, and the proper ends of human beings. All too often, contemporary secular societies expect new technologies to improve human lives and pay no heed to ideal human forms, on the assumption that ideal forms are a fantasy. Then we are shocked when the new technologies are abused and not only bring out the worst in human nature but enhance the power of the worst.

Materialism: There’s Nothing Solid About It!

“[I]n truth there are only atoms and the void.” – Democritus

In the ancient Greek transition from mythos to logos, stories about the world and human lives being shaped by gods and goddesses gradually came to be replaced by new explanations from philosophers. Among these philosophers were the “atomists,” including Leucippus and Democritus. Later, the Roman philosopher and poet Lucretius expounded an atomist view of the universe. The atomists were regarded as being among the first atheists and the first materialists — if they did acknowledge the existence of the gods (probably due to public pressures), they argued that the gods had no active influence on the world. Although the atomists’ understanding of the atom was primitive and far from our modern scientific understanding — they did not possess particle accelerators, after all — they were remarkably farsighted about the actual workings of nature. To this day, the symbol of the organization American Atheists is a depiction of the atom.

However, the ancient atomists’ conception of how the universe is constructed, with solid particles of matter combining to make complex organizational structures, has become problematic given the findings of atomic physics in the past hundred years. Increasingly, scientists have found that reality consists not of solid matter, but of organizational principles and qualities that give us the impression of solidity. And while this new view does not restore the Greek gods to prominence, it does raise questions about how we ought to understand and interpret reality.

_________________________

 

Leucippus and Democritus lived in the fifth century BC. While it is difficult to disentangle their views because of gaps in the historical record, both philosophers argued that all existence was ultimately based on tiny, indestructible particles (“atoms”) and empty space. While not explicitly denying the existence of the gods, the philosophy of Leucippus and Democritus made it clear that the gods had no significant role in the creation or maintenance of the universe. Rather, atoms existed eternally and moved randomly in empty space, until they collided and began to form larger units, leading to the growth of stars and planets and various life forms. The differences between types of matter, such as iron, water, and air, were due to differences in the atoms that composed this matter. Atoms could join with each other because of a variety of hooks or sockets in the atoms that allowed for attachments.

Hundreds of years later, the Roman philosopher Lucretius expanded upon atomist theory in his poem De rerum natura (On the Nature of Things). Lucretius explained that the universe consisted of an infinite number of atoms moving and combining under the influence of laws and random chance, not the decisions of gods. Lucretius also denied the existence of an afterlife, and argued that human beings should not fear death. Although Lucretius was not explicitly atheistic, his work was perceived by Christians in the Middle Ages as being essentially atheistic in outlook and was denounced for that reason.

Not all of the ancient philosophers, even those most committed to reason, accepted the atomist view of existence. It is reported that Plato hated Democritus and wished that his books be burned. Plato did accept that there were different types of matter composing the world, but posited that the particles were perfect triangles, brought together in various combinations. In addition, these triangles were guided by a cosmic intelligence, and were not colliding randomly without purpose. For Plato, the ultimate reality was the Good, and the things we saw all around us were shadows of perfect, ideal forms that were the blueprint for the less-perfect existing things.

For two thousand years after Democritus, atomism as a worldview remained a minority viewpoint — after all, religion was still an important institution in societies, and no one had yet seen or confirmed the existence of atoms. But by the nineteenth century, advances in science had accumulated to the point at which atomism became increasingly popular as a view of reality. No longer was there a need for God or gods to explain nature and existence; atoms and laws were all that were needed. The philosophy of materialism — the view that matter is the fundamental substance in nature and that all things, including mental aspects and consciousness, are results of material interactions — became increasingly prevalent. The political-economic ideology of communism, which at one time ruled one-third of the world’s population, was rooted in materialism. In fact, Karl Marx wrote his doctoral dissertation on Democritus’ philosophy of nature, and Vladimir Lenin authored a philosophical book on materialism, including chapters on physics, that was mandatory reading in the higher education system of the Soviet Union.

As physicists conducted increasingly sophisticated experiments on the smallest parts of nature, however, certain results began to challenge the view that atoms were solid particles of matter. For one thing, it was found that atoms themselves were not solid throughout but consisted of electrons orbiting around an extremely small nucleus of protons and neutrons. The nucleus of an atom is actually 100,000 times smaller than the entire atom, even though the nucleus contains almost the entire mass of the atom. As one article has put it, “if the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium.” For that reason, some have concluded that all “solid” objects in the universe, including human beings, are actually about 99.9999999 percent empty space, because of the empty space in the atoms! Others respond that in fact it is not “empty space” in the atom, but rather a “field” or “wave function” — and here it gets confusing.

In fact, subatomic particles do not have a precise location in space; they behave like a fuzzy wave until they interact with an observer, and then the wave “collapses” into a particle. The bizarreness of this activity confounded the brightest scientists in the world, and to this day, there are arguments among scientists about what is “really” going on at the subatomic level.

The currently dominant interpretation of subatomic physics, known as the “Copenhagen interpretation,” was developed by the physicists Werner Heisenberg and Niels Bohr in the 1920s. Heisenberg subsequently wrote a book, Physics and Philosophy, to explain how atomic physics changed our interpretation of reality. According to Heisenberg, the traditional scientific view of material objects and particles existing objectively, whether we observe them or not, could no longer be upheld. Rather than existing as solid objects, subatomic particles existed as “probability waves” — in Heisenberg’s words, “something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.” (Physics and Philosophy, p. 41 — page numbers are taken from the 1999 edition published by Prometheus Books). According to Heisenberg:

The probability function does . . . not describe a certain event but, at least during the process of observation, a whole ensemble of possible events. The observation itself changes the probability function discontinuously; it selects of all possible events the actual one that has taken place. . . Therefore, the transition from the ‘possible’ to the ‘actual’ takes place during the act of observation. If we want to describe what happens in an atomic event, we have to realize that the word ‘happens’ can apply only to the observation, not to the state of affairs between two observations. It applies to the physical, not the psychical act of observation, and we may say that the transition from the ‘possible’ to the ‘actual’ takes place as soon as the interaction of the object with the measuring device, and thereby with the rest of the world, has come into play. (pp. 54-55)

Later in his book, Heisenberg writes: “If one wants to give an accurate description of the elementary particle — and here the emphasis is on the word ‘accurate’ — the only thing that can be written down as a description is a probability function.” (p. 70) Moreover,

In the experiments about atomic events we have to do with things and facts, with phenomena that are just as real as any phenomena in daily life. But the atoms or the elementary particles themselves are not as real; they form a world of potentialities or possibilities rather than one of things or facts. (p. 186)

This sounds downright crazy to most people. The idea that the solid objects of our everyday experience are made up not of smaller solid parts but of probabilities and potentialities seems bizarre. However, Heisenberg noted that observed events at the subatomic level did seem to fit the interpretation of reality given by the Greek philosopher Aristotle over 2000 years ago. According to Aristotle, reality was a combination of matter and form, but matter was not a set of solid particles but rather potential, an indefinite possibility or power that became real only when it was combined with form to make actual existing things. (pp. 147-49) To provide some rough analogies: a supply of wood can potentially be a table or a chair or a house — but it must be combined with the right form to become actually a table or a chair or a house. Likewise, a block of marble is potentially a statue of a man or a woman or an animal, but only when a sculptor shapes the marble into that particular form does the statue become actual. In other words, actuality (reality) equals potential plus form.

According to Heisenberg, Aristotle’s concept of potential was roughly equivalent to the concept of “energy” in modern physics, and “matter” was energy combined with form.

All the elementary particles are made of the same substance, which we may call energy or universal matter; they are just different forms in which the matter can appear.

If we compare this situation with the Aristotelian concepts of matter and form, we can say that the matter of Aristotle, which is mere ‘potential,’ should be compared to our concept of energy, which gets into ‘actuality’ by means of the form, when the elementary particle is created. (p. 160)

In fact, all modern physicists agree that matter is simply a form of energy (and vice versa). In the earliest stages of the universe, matter emerged out of energy, and that is how we got atoms in the first place. There is nothing inherently “solid” about energy, but energy can be transformed into particles, and particles can be transformed back into energy. According to Heisenberg, “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .” (p. 63)
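To put a rough number on this equivalence (my own illustration using the standard textbook relation, not a claim drawn from Heisenberg), Einstein’s formula gives the energy locked up in any mass; even the mass of a single electron corresponds to a definite, measurable amount of energy:

\[
E = mc^{2}, \qquad m_{e} \approx 9.1 \times 10^{-31}\ \mathrm{kg} \;\Rightarrow\; E \approx 8.2 \times 10^{-14}\ \mathrm{J} \approx 0.51\ \mathrm{MeV}
\]

This is the sense in which a particle “is” energy in a particular form: photons carrying enough energy can, under the right conditions, convert into an electron-positron pair, and the pair can annihilate back into photons.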

So what exactly is energy? Oddly enough, physicists have a hard time stating exactly what energy is. Energy is usually defined as the “capacity to do work” or the “capacity to cause movement,” but these definitions remain somewhat vague, and there is no specific mechanism or form that physicists can point to in order to describe energy. Gottfried Leibniz, who developed the first formula for measuring energy, referred to energy as vis viva or “living force,” a concept which is anthropomorphic and nearly theological. In fact, there are so many different types of energy and so many different ways to measure these types of energy that many physicists are inclined to the view that energy is not a substance but just a mathematical abstraction. According to the great American physicist Richard Feynman, “It is important to realize that in physics today, we have no knowledge of what energy ‘is.’ We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas.” The only reason physicists know that energy exists is that they have performed numerous experiments over the years and have found that however energy is measured, the amount of energy in an isolated system always remains the same — energy can only be transformed; it can neither be created nor destroyed. Energy in itself has no form, and there is no such thing as “pure energy.” Oh, and energy is relative too — you have to specify the frame of reference when measuring energy, because the position and movement of the observer matter. For example, if you move toward a photon, its energy in that frame of reference will be greater; if you move away from a photon, its energy will be less.
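To make that last point concrete, here is the standard special-relativity result for an observer moving directly toward or away from the light source at speed v (my illustration; β is that speed as a fraction of the speed of light):

\[
E_{\mathrm{approaching}} = E \sqrt{\frac{1+\beta}{1-\beta}}, \qquad E_{\mathrm{receding}} = E \sqrt{\frac{1-\beta}{1+\beta}}
\]

At half the speed of light, for example, the approaching observer measures about 1.7 times the photon’s original energy and the receding observer only about 0.58 times, even though nothing about the photon itself has changed.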

In fact, the melding of relativity theory with quantum physics has further undermined materialism and our common sense notions of what it is to be “real.” A 2013 article in Scientific American by Dr. Meinard Kuhlmann of Bielefeld University in Germany, “What Is Real?”, lays out some of these paradoxes of existence at the subatomic level. For example, scientists can create a vacuum in the laboratory, but when a Geiger counter is connected to the vacuum container, it will detect matter. In addition, a vacuum will contain no particles according to an observer at rest, but will contain many particles from the perspective of an accelerating observer! Kuhlmann concludes: “If the number of particles is observer-dependent, then it seems incoherent to assume that particles are basic. We can accept many features to be observer-dependent but not the fact of how many basic building blocks there are.”

So, if the smallest parts of reality are not tiny material objects, but potentialities and probabilities, which vary according to the observer, then how do we get what appears to be solid material objects, from rocks to mountains to trees to houses and cars? According to Kuhlmann, some philosophers and scientists say that we need to think about reality as consisting entirely of relations. In this view, subatomic particles have no definite position in space until they are observed because determining position in space requires a relation between an observer and observed. Position is mere potential until there is a relation. You may have heard of the old puzzle, “If a tree falls in a forest, and no one is around to hear it, does it make a sound?” The answer usually given is that sound requires a perceiver who can hear, and it makes no sense to talk about “sound” without an observer with functional ears. In the past, scientists believed that if objects were broken down into their smallest parts, we would discover the foundation of reality; but in the new view, when you break down larger objects into their smallest parts, you are gradually taking apart the relations that compose the object, until what you have left is potential. It is the relations between subatomic particles and observers that give us solidity.

Another interpretation Kuhlmann discusses is that the fundamental basis of reality is bundles of properties. In this view, reality consists not of objects or things, but of properties such as shape, mass, color, position, velocity, spin, etc. We think of things as being fundamentally real and properties as being attributes of things. But in this new view, properties are fundamentally real and “things” are what we get when properties are bundled together in certain ways. For example, we recognize a red rubber ball as being a red rubber ball because our years of experience and learning in our culture have given us the conceptual category of “red rubber ball.” An infant does not have this conceptual category, but merely sees the properties: the roundness of the shape, the color red, the elasticity of the rubber. As the infant grows up, he or she learns that this bundle of properties constitutes the “thing” known as a red rubber ball; but it is the properties that are fundamental, not the thing. So when scientists break down objects into smaller and smaller pieces in their particle accelerators, they are gradually taking apart the bundles of properties until the particles no longer even have a definite position in space!

So whether we think of reality as consisting of relations or bundles of properties, there is nothing “solid” underlying everything. Reality consists of properties or qualities that emerge out of potential, and then bundle together in certain ways. Over time, some bundles or relations come apart, and new bundles or relations emerge. Finally, in the evolution of life, there is an explosion of new bundles of properties, with some bundles containing a staggering degree of organizational complexity, built incrementally over millions of years. The proper interpretation of this organizational complexity will be discussed in a subsequent post.

 

How Random is Evolution?

“Man is the product of causes which had no prevision of the end they were achieving . . . his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms. . . .” – Bertrand Russell

In high school or college, you were probably taught that human life evolved from lower life forms, and that evolution was a process in which random mutations in DNA, the genetic code, led to the development of new life forms. Most mutations are harmful to an organism, but some mutations confer an advantage on an organism, and that organism is able to flourish and pass down its genes to subsequent generations; hence, “survival of the fittest.”

Many people reject the theory of evolution because it seemingly removes the role of God in the creation of life and of human beings and suggests that the universe is highly disordered. But all available evidence suggests that life did evolve, that the world and all of its life was not created in six days, as the Bible asserted. Does this mean that human life is an accident, that there is no larger intelligence or purpose to the universe?

I will argue that although evolution does indeed suggest that the traditional Biblical view of life’s origins is incorrect, people have the wrong impression of (1) what randomness in evolution means and (2) how large the role of randomness is in evolution. While it is true that individual micro-events in evolution can be random, these events are part of a larger system, and this system can be highly ordered even if particular micro-events are random. Moreover, recent research in evolution indicates that in addition to random mutation, organisms can respond to environmental factors by changing in a manner that is purposive, not random, in a direction that increases their ability to thrive.

____________________

So what does it mean to say that something is “random”? According to the Merriam-Webster dictionary, “random” means “a haphazard course,” “lacking a definite plan, purpose, or pattern.” Synonyms for “random” include the words “aimless,” “arbitrary,” and “slapdash.” It is easy to see why, when people are told that evolutionary change is a random process, many reject the idea outright. This is not necessarily a matter of unthinking religious prejudice. Anyone who has examined nature and the biology of animals and human beings can’t help but be impressed by how enormously complex and precisely ordered these systems are. The fact of the matter is that it is extraordinarily difficult to build and maintain life; death and nonexistence are relatively easy. But what does it mean to lack “a definite plan, purpose, or pattern”? I contend that this definition, insofar as it applies to evolution, only refers to the particular micro-events of evolution when considered in isolation and not the broader outcome or the sum of the events.

Let me illustrate what I mean by presenting an ordinary and well-known case of randomness: rolling a single die. A die is a cube with a number, 1-6, on each of its six sides. The outcome of any single roll of the die is random and unpredictable. If you roll a die multiple times, each outcome, as well as the particular sequence of outcomes, will be unpredictable. But if you look at the broader, long-term outcome after 1000 rolls, you will see this pattern: an approximately equal number of ones, twos, threes, fours, fives, and sixes will come up, and the average value of all rolls will be approximately 3.5.
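A few lines of code make the point concrete. This is a minimal sketch in Python; the exact counts will differ on every run, but the long-run pattern will not:

import random
from collections import Counter

rolls = [random.randint(1, 6) for _ in range(1000)]   # 1000 rolls of a fair die
counts = Counter(rolls)                               # how often each face came up
average = sum(rolls) / len(rolls)

for face in range(1, 7):
    print(f"Face {face}: {counts[face]} times")       # each face appears roughly 167 times
print(f"Average of all rolls: {average:.2f}")         # typically very close to 3.5

Each individual roll is unpredictable, yet the aggregate pattern is stable.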

Why is this? Because the die itself is a highly precise, ordered system. Each die must have sides of equal length and an equal distribution of density/weight throughout in order to make the outcome truly unpredictable; otherwise a gambler who knows the design of the die may have an edge. One die manufacturer brags, “With tolerances less than one-third the thickness of a human hair, nothing is left to chance.” [!] In fact, a common method of cheating with dice is to shave one or more sides or insert a weight into one end of the die. This results in a system that is also precisely ordered, but in a way that makes certain outcomes more likely. After a thousand rolls of the die, one or more outcomes will come up more frequently, and this pattern will stand out suspiciously. But the person who cheated by tilting the odds in one direction may have already escaped with his or her winnings.

If you look at how casinos make money, it is precisely by structuring the rules of each game to give the casino an edge that allows it to make a profit in the long run. The precise outcome of each particular game is not known with certainty, the particular sequence of outcomes is not known, and the balance sheet of the casino at the end of the night cannot be predicted. But there is definitely a pattern: in the long run, the sum of events results in the casino winning and making a profit, while the players as a group will lose money. When casinos go out of business, it is generally because they can’t attract enough customers, not because they lose too many games.
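The same long-run logic can be sketched in a few lines. Purely for illustration, assume an even-money, roulette-style bet that the player wins with probability 18/38, which gives the house an edge of about 5.3 percent:

import random

def casino_profit_for_night(num_bets=10_000, stake=1):
    """Simulate one night of even-money bets; the player wins each bet with probability 18/38."""
    profit = 0
    for _ in range(num_bets):
        player_wins = random.random() < 18 / 38
        profit += -stake if player_wins else stake
    return profit

# No single bet is predictable, but the nightly totals reliably favor the house:
print([casino_profit_for_night() for _ in range(5)])  # each total is usually near +526, about 5.3% of the amount wagered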

The ability to calculate the sum of a sequence of random events is the basis of the so-called “Monte Carlo” method in mathematics. Basically, the Monte Carlo method involves setting certain parameters, selecting random inputs until the number of inputs is quite large, and then calculating the final result. It’s like throwing darts at a dartboard repeatedly and examining the pattern of holes. One can use this method with 30,000 randomly plotted points to calculate the value of pi to within 0.07 percent.
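Here is a minimal sketch of the dartboard version of the idea: scatter random points across a unit square and count the fraction that land inside the quarter-circle of radius one; that fraction approximates pi/4.

import random

def estimate_pi(num_points=30_000):
    """Monte Carlo estimate of pi from random points scattered in the unit square."""
    inside = 0
    for _ in range(num_points):
        x, y = random.random(), random.random()   # one random "dart"
        if x * x + y * y <= 1.0:                  # did it land inside the quarter-circle?
            inside += 1
    return 4 * inside / num_points

print(estimate_pi())   # typically lands within a few tenths of a percent of 3.14159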

So if randomness can exist within a highly precise order, what is the larger order within which the random mutations of evolution operate? One aspect of this order is the bonding preferences of atoms, which are responsible not only for shaping how organisms arise, but also for how organisms eventually develop into astonishingly complex and wondrous forms. Without atomic bonds, structures would fall apart as quickly as they came together, preventing any evolutionary advances. The bonding preferences of atoms shape the parameters of development and result in molecular structures (DNA, RNA, and proteins) that retain a memory or blueprint, so that evolutionary change is incremental. The incremental development of organisms allows for the growth of biological forms that are eventually capable of running at great speeds, flying long distances, swimming underwater, forming societies, using tools, and, in the case of humans, building technical devices of enormous sophistication.

The fact of incremental change that builds upon previous advances is a feature of evolution that makes it more than a random process. This is illustrated by biologist Richard Dawkins’ “weasel program,” a computer simulation of how evolution works by combining random micro-events with the retaining of previous structures so that over time a highly sophisticated order can develop. The weasel program is based on the “infinite monkey theorem,” the fanciful proposal that an infinite number of monkeys with an infinite number of typewriters would eventually produce the works of Shakespeare. This theorem has been used to illustrate how order could conceivably emerge from random and mindless processes. What Dawkins did, however, was write a computer program to produce just one sentence from Shakespeare’s Hamlet: “Methinks it is like a weasel.” Dawkins structured the computer program to begin with a single random sentence, reproduce this sentence repeatedly, and add random errors (“mutations”) in each “generation.” If the new sentence was at least somewhat closer to the target phrase “Methinks it is like a weasel,” that sentence became the new parent sentence. In this way, subsequent generations would gradually assume the form of the correct sentence. For example:

Generation 01: WDLTMNLT DTJBKWIRZREZLMQCO P
Generation 02: WDLTMNLT DTJBSWIRZREZLMQCO P
Generation 10: MDLDMNLS ITJISWHRZREZ MECS P
Generation 20: MELDINLS IT ISWPRKE Z WECSEL
Generation 30: METHINGS IT ISWLIKE B WECSEL
Generation 40: METHINKS IT IS LIKE I WEASEL
Generation 43: METHINKS IT IS LIKE A WEASEL

The weasel program is a great example of how random change can produce order over time, BUT only under highly structured conditions, with a defined goal and the retention of each step toward that goal. Without these conditions, a computer program randomly selecting letters would be unlikely to produce the phrase “Methinks it is like a weasel” in the lifetime of the universe, according to Dawkins!
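For readers who want to see the mechanism, here is a minimal sketch of a weasel-style program. This is my own reconstruction of the idea rather than Dawkins’ original code, and the mutation rate and the number of offspring per generation are arbitrary choices:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    """Count how many characters already match the target phrase."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, randomly substituting each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))   # a random starting sentence
generation = 0
while parent != TARGET:
    generation += 1
    candidates = [parent] + [mutate(parent) for _ in range(100)]        # 100 mutated copies per generation
    parent = max(candidates, key=score)                                 # keep whichever copy is closest to the target
print(f"Reached the target in {generation} generations")

Remove the scoring step, so that nothing is retained from one generation to the next, and the program degenerates into the hopeless monkeys-at-typewriters scenario.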

It is the retention of most evolutionary advances, combined with a small degree of randomness, that allows evolution to produce increasingly complex life forms. Reproduction has some random elements in it, but it is actually remarkably precise and effective in producing offspring at least roughly similar to their parents. It is not the case that a female human is as likely to give birth to a dog, a pig, or a chicken as to give birth to a human. It would be very strange indeed if evolution were that random!

But there is even more to the story of evolution.

Recent research in biology has indicated that there are factors in nature that tend to push development in certain directions favorable to an organism’s flourishing. Even if you imagine evolution in nature as a huge casino, with a lot of random events, scientists have discovered that the players are strategizing: they are increasing or decreasing their level of gambling in response to environmental conditions, shaving the dice to obtain more favorable outcomes, and cooperating with each other to cheat the casino!

For example, it is now recognized among biologists that a number of microorganisms are capable to some extent of controlling their rate of mutation, increasing the rate of mutation during times of environmental challenge and stress, and suppressing the rate of mutation during times of peace and abundance. As a result of accelerated mutations, certain bacteria can acquire the ability to utilize new sources of nutrition, overcoming the threat of extinction arising from the depletion of their original food source. In other words, in response to feedback from the environment, organisms can decide to try to preserve as much of their genome as they can or experiment wildly in the hope of finding a solution to new environmental challenges.

The organism known as the octopus (a cephalopod) has a different strategy: it actively suppresses mutation in DNA and prefers to recode its RNA in response to environmental challenges. For example, octopi in the icy waters of the Antarctic recode their RNA in order to keep their nerves firing in cold water. This response is not random but directly adaptive. RNA recoding in octopi and other cephalopods is particularly prevalent in proteins responsible for the nervous system, and it is believed by scientists that this may explain why octopi are among the most intelligent creatures on Earth.

The cephalopods are somewhat unusual creatures, but there is evidence that other organisms can also adapt in a nonrandom fashion to their environment by employing molecular factors that suppress or activate the expression of certain genes — the study of these molecular factors is known as “epigenetics.” For example, every cell in a human fetus has the same DNA, but this DNA can develop into heart tissue, brain tissue, skin, liver, etc., depending on which genes are expressed and which genes are suppressed. The molecular factors responsible for gene expression are largely proteins, and these epigenetic factors can result in heritable changes in response to environmental conditions that are definitely not random.

The water flea, for example, can come in different variations, despite the same DNA, in response to the environmental conditions of the mother flea. If the mother flea experienced a large predator threat, the children of that flea would develop a spiny helmet for protection; otherwise the children would develop normal helmet-less heads. Studies have found that in other creatures, a particular diet can turn certain genes on or off, modifying offspring without changing DNA. In one study, mice that exercised not only enhanced their own brain function, but their children had enhanced brain function as well, though the effect only lasted one generation if exercise stopped. The Mexican cave fish once had eyes, but in its new dark environment, epigenetics has been responsible for turning off the genes responsible for eye development; its original DNA remains unchanged. (The hypothesized reason for this is that organisms tend to discard traits that are not needed in order to conserve energy.)

Recent studies of human beings have uncovered epigenetic adaptations that have allowed humans to flourish in such varied environments as deserts, jungles, and polar ice. The Oromo people of Ethiopia, recent settlers to the highlands of that country, have had epigenetic changes to their immune system to cope with new microbiological threats. Other populations in Africa have genetic mutations that have the twin effects of protecting against malaria and causing sickle cell anemia — recently it has been found that these mutations are being silenced in the face of declining malarial threats. Increasingly, scientists are recognizing the large role of epigenetics in the evolution of human beings:

By encouraging the variations and adaptability of our species, epigenetic mechanisms for controlling gene expression have ensured that humanity could survive and thrive in any number of environments. Epigenetics is a significant part of the reason our species has become so adaptable, a trait that is often thought to distinguish us from what we often think of as lesser-evolved and developed animals that we inhabit this earth with. Indeed, it can be argued that epigenetics is responsible for, and provided our species with, the tools that truly made us unique in our ability to conquer any habitat and adapt to almost any climate. (Bioscience Horizons, 1 January 2017)

In fact, despite the hopes of scientists everywhere that the DNA sequencing of the human genome would provide a comprehensive biological explanation of human traits, it has been found that epigenetics may play a larger role in the complexity of human beings than the number of genes. According to one researcher, “[W]e found out that the human genome is probably not as complex and doesn’t have as many genes as plants do. So that, then, made us really question, ‘Well, if the genome has less genes in this species versus this species, and we’re more complex potentially, what’s going on here?'”

One additional nonrandom factor in evolution should be noted: the role of cooperation between organisms, which may even lead to biological mergers that create a new organism. Traditionally, evolution has been thought of primarily as random changes in organisms followed by a struggle for existence between competing organisms. It is a dark view of life. But increasingly, biologists have discovered that cooperation between organisms, known as symbiosis, also plays a role in the evolution of life, including the evolution of human beings.

Why was the role of cooperation in evolution overlooked until relatively recently? A number of biologists have argued that the society and culture of Darwin’s time played a significant role in shaping his theory — in particular, Adam Smith’s book The Wealth of Nations. In Smith’s view, the basic unit of economics was the self-interested individual on the marketplace, who bought and sold goods without any central planner overseeing his activities. Darwin essentially adopted this view and applied it to biological organisms: as businesses competed on the marketplace and flourished or died depending on how efficient they were, so too did organisms struggle against each other, with only the fittest surviving.

However, even in the late nineteenth century, a number of biologists noted cases in nature in which cooperation played a prominent role in evolution. In the 1880s, the Scottish biologist Patrick Geddes proposed that the giant green anemone contained algal (algae) cells as well as animal cells because a cooperative relationship had evolved between the two types of cells, resulting in a merger in which the algal cells were incorporated into the animal flesh of the anemone. In the latter part of the twentieth century, biologist Lynn Margulis carried this concept further. Margulis argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, and created a more advanced life form. This theory was regarded with much skepticism at the time it was proposed, but over time it became widely accepted. The traditional picture of evolution as one in which new species diverge from older species and compete for survival has had to be supplemented with the picture of cooperative behavior and mergers. As one researcher has argued, “The classic image of evolution, the tree of life, almost always exclusively shows diverging branches; however, a banyan tree, with diverging and converging branches is best.”

More recent studies have demonstrated the remarkable level of cooperation between organisms that is the basis for human life. One study from a biologist at the University of Cambridge has proposed that human beings have as many as 145 genes that have been borrowed from bacteria, other single-celled organisms, and viruses. In addition, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes! If, as one biologist has argued, each human being is a “society of cells,” it would be equally valid to describe a human being as a “society of cells and microbes.”

Is there randomness in evolution? Certainly. But the randomness is limited in scope, it takes place within a larger order which preserves incremental gains, and it provides the experimentation and diversity organisms need to meet new challenges and new environments. Alongside this randomness are epigenetic adaptations that turn genes on or off in response to environmental influences and the cooperative relations of symbiosis, which can build larger and more complex organisms. These additional facts do not prove the existence of a creator-God that oversees all of creation down to the most minute detail; but they do suggest a purposive order within which an astonishing variety of life forms can emerge and grow.