What is “Transcendence”?

You may have noticed a number of writings on religious topics that make reference to “transcendence” or “the transcendent.” However, the word “transcendence” is usually not well defined, if it is defined at all. The Catechism of the Catholic Church makes several references to transcendence, but it’s not completely clear what transcendence means other than the infinite greatness of God, and the fact that God is “the inexpressible, the incomprehensible, the invisible, the ungraspable.” For those who value reason and precise arguments, this vagueness is unsatisfying. Astonishingly, the fifteen-volume Catholic Encyclopedia (1907-1914) did not even have an entry on “transcendence,” though it did have an entry on “transcendentalism,” a largely secular philosophy with a variety of schools and meanings. (The New Catholic Encyclopedia in 1967 finally included an entry on “transcendence.”)

The Oxford English Dictionary defines “transcendence” as “the action or fact of transcending, surmounting, or rising above . . . ; excelling, surpassing; also the condition or quality of being transcendent, surpassing eminence or excellence. . . .” The reference to “excellence” is probably key to understanding what “transcendence” is. In my previous essay on ancient Greek religion, I pointed out that areté, the Greek word for “excellence,” was a central idea of Greek culture, and that one cannot fully appreciate ancient Greek pagan religion without recognizing that devotion to excellence was central to it. The Greeks depicted their gods as human, but with perfect physical forms. And while the behavior of the Greek gods was often dubious from a moral standpoint, the Greek gods were still regarded as the givers of wisdom, order, justice, love, and all the institutions of human civilization.

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Theologians refer to transcendence as one of the two natures of God, the other being “immanence.” Transcendence refers to the higher nature of God and immanence refers to God as He currently works in reality, i.e., the cosmic order. The division between those who believe in a personal God and those who believe in an impersonal God reflects the division between the transcendent and immanent view of God. It is no surprise that most scientists who believe in God tend more to the view of an impersonal God, because their whole life is dedicated to examining the reality of the cosmic order, which seems to operate according to a set of rules rather than personal supervision.

Of course, atheists don’t even believe in an impersonal God. One famous atheist, Sigmund Freud, argued that religion was an illusion, a simple exercise in “wish fulfillment.” According to Freud, human beings desired love, immortality, and an end to suffering and pain, so they gravitated to religion as a solution to the inevitable problems and limitations of mortal life. Marxists have a similar view of religion, seeing promises of an afterlife as a barrier to improving actual human life.

Another view was taken by the American philosopher George Santayana, whose book, Reason in Religion, is one of the very finest books ever written on the subject of religion. According to Santayana, religion was an imaginative and poetic interpretation of life; religion supplied ideal ends to which human beings could orient their lives. Religion failed only when it attributed literal truth to these imaginative ideal ends. Thus religions should be judged, according to Santayana, by whether they were good or bad, not by whether they were true or false.

This criterion for judging religion would appear to be irrational, both to rationalists and to those who cling to faith. People tend to equate worship of God with belief in God, and often see literalists and fundamentalists as the most devoted of all. But I would argue that worship is the act of submission to ideal ends, which hold value precisely because they are higher than actually existing things, and therefore cannot pass traditional tests of truth, which call for a correspondence to reality.

In essence, worship is submission to a transcendent Good. We see good in our lives all the time, but we know that the particular goods we experience are partial and perishable. Freud is right that we wish for goods that cannot be acquired completely in our lives and that we use our imaginations to project perfect and eternal goods, i.e. God and heaven. But isn’t it precisely these ideal ends that are sacred, not the flawed, perishable things that we see all around us? In the words of Santayana,

[I]n close association with superstition and fable we find piety and spirituality entering the world. Rational religion has these two phases: piety, or loyalty to necessary conditions, and spirituality, or devotion to ideal ends. These simple sanctities make the core of all the others. Piety drinks at the deep, elemental sources of power and order: it studies nature, honours the past, appropriates and continues its mission. Spirituality uses the strength thus acquired, remodeling all it receives, and looking to the future and the ideal. (Reason in Religion, Chapter XV)

People misunderstand ancient Greek religion when they think it is merely a set of stories about invisible personalities who fly around controlling nature and intervening in human affairs. Many Greek myths were understood to be poetic creations, not history; there were often multiple variations of each myth, and people felt free to modify the stories over time, create new gods and goddesses, and change the functions/responsibilities of each god. Rational consistency was not expected, and depictions of the appearance of any god or goddess in statues or paintings could vary widely. For the Greeks, the gods were not just personalities, but transcendent forms of the Good. This is why Greek religion also worshipped idealized ends and virtues such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks represented these idealized ends and virtues as persons (usually females) in statues, built temples for them, and composed worshipful hymns to them. In fact, the tendency of the Greeks to depict any desired end or virtue as a person was so prevalent that it is sometimes difficult for historians to tell whether a particular statue or temple was meant for an actual goddess/god or was a personified symbol. For the ancient Greeks, the distinction may not have been that important, for they tended to think in highly poetic and metaphorical terms.

This may be fine as an interpretation of religion, you may say, but does it make sense to conceive of imaginative transcendent forms as persons or spirits who can actually bring about the goods and virtues that we seek? Is there any reason to think that prayer to Athena will make us wise, that singing a hymn to Zeus will help us win a war, or that a sacrifice at the temples of “Peace” or “Health” will bring us peace or health? If these gods are not powerful persons or spirits that can hear our prayers or observe our sacrifices, but merely poetic representations or symbols, then what good are they and what good is worship?

My view is this: worship and prayer do not affect natural causation. Storms, earthquakes, disease, and all the other calamities that have afflicted humankind from the beginning are not affected by prayer. Addressing these calamities requires research into natural causation, planning, human intervention, and technology. What worship and prayer can do, if they are directed at the proper ends, is help us transcend ourselves, make ourselves better people, and thereby make our societies better.

In a previous essay, I reviewed the works of various physicists, who concluded that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. I think this dynamic view of reality is what we need in order to understand the relationship between the transcendent and the actual. We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that surpasses who we already are. The pantheism of Spinoza and Einstein is more rational than traditional myths that attributed natural events to a personal God who created the world in six days and subsequently punished evil by causing natural disasters. But pantheism is ultimately a poor basis for religion. What would be the point of worshipping the law of gravity or electromagnetism or the elements in the periodic table? These foundational parts of the universe are impressive, but I would argue that aspiring to something higher is fundamental not only to human nature but to the universe itself. The universe, after all, began simply with a concentrated point of energy; then space expanded and a few elements such as hydrogen and helium formed; only after hundreds of millions of years did the first stars, planets, and other elements necessary for life begin to emerge.

Worshipping the transcendent orients the self to a higher good, out of the immediate here-and-now. And done properly, worship results in worthy accomplishments that improve life. We tend to think of human civilization as being based on the rational mastery of a body of knowledge. But all knowledge began with an imagined transcendent good. The very first lawgivers had no body of laws to study; the first ethicists had no texts on morals to consult; the first architects had no previous designs to emulate; the first mathematicians had no symbols to calculate with; the first musicians had no composers to study. All our knowledge and civilization began with an imagined transcendent good. This inspired experimentation with primitive forms, and then improvement on those initial primitive efforts. Only much later, after many centuries, did the fields of law, ethics, architecture, mathematics, and music become bodies of knowledge requiring years of study. So we attribute these accomplishments to reason, forgetting the imaginative leaps that first spurred these fields.

 

Are Human Beings Just Atoms?

In a previous essay on materialism, I discussed the bizarre nature of phenomena on the subatomic level, in which particles have no definite position in space until they are observed. Referencing the works of several physicists and philosophers, I put forth the view that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. In this view, when one breaks down reality into smaller and smaller parts, one does not reach the fundamental units of matter; rather, one is gradually unbundling properties and qualities until the smallest objects no longer even have a definite position in space!

Why is this important? One reason is that the enormous prestige and accomplishments of science have sometimes led us down the wrong path in properly describing and interpreting reality. Science excels at advancing our knowledge of how things work, by breaking down wholes into component parts and manipulating those parts into better arrangements that benefit humanity. This is how we got modern medicine, computers, air conditioning, automobiles, and space travel. However, science sometimes falls short in properly describing and interpreting reality, precisely because it focuses more on the parts than the wholes.

This defect in science becomes particularly glaring when certain scientists attempt to describe what human beings are like. All too often there is a tendency to reduce humans to their component parts, whether these parts are chemical elements (atoms), chemical compounds (molecules), or the much larger molecules known as genes. However, while these component parts make up human beings, there are properties and qualities in human beings that cannot be adequately described in terms of these parts.

Marcelo Gleiser, a physicist at Dartmouth College, argues that “life is the property of a complex network of biochemical reactions . . . a kind of hungry chemistry that is able to duplicate itself.” Biologist Richard Dawkins claims that humans are “just gene machines,” and “living organisms and their bodies are best seen as machines programmed by the genes to propagate those very same genes,” though he qualifies his statement by noting that “there is a very great deal of complication, and indeed beauty in being a gene machine.” Philosopher Daniel Dennett claims that human beings are “moist robots” and the human mind is a collection of computer-like information processes which happen to take place in carbon-based rather than silicon-based hardware.

Now it is true that human beings are composed of atoms, which combine into molecules and chemical compounds, including genes. The issue, however, is whether describing the parts that compose a human being is the same as describing the whole human being. Yes, human beings are composed of atoms of oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. But these atoms can be found in many, many places throughout the universe, in varying quantities and combinations, and they do not have human qualities unless and until they are organized in just the right way. Likewise, genes are ubiquitous in life forms ranging from mammals to lizards to plants to bacteria. Even viruses have genes, though most scientists argue that viruses are not true life forms because they need a host to reproduce. Nevertheless, while human beings share a very few properties and qualities with bacteria and viruses, humans clearly have many properties and qualities that the lower life forms do not.

In fact, the very ability to recognize the difference between life and death can be lost through excessive focus on atoms and molecules. Consider the following: an emergency room doctor treats a patient suffering from a heart attack. Despite the physician’s best efforts, despite all of the doctor’s training and knowledge, the patient dies on the table. So what is the difference between the patient who has died and the patient as he was several hours ago? The quantity and types of atoms composing the body are approximately the same as when the patient was alive. So what has changed? Obviously, the properties and qualities expressed by the organization of the atoms in the human being have changed. The heart no longer supplies blood to the rest of the body, the lungs no longer supply oxygen, the brain no longer has electrical activity, the human being no longer has the ability to run or walk or jump or talk or think or love. Atoms have to be organized in an extremely precise manner in order for these properties and qualities to emerge, and this organization has been lost. So if we are really going to accurately describe what a human being is, we have to refer not just to the atoms, but to the overall organization or form.

The issue of form is what separates the ancient Greek philosophers Democritus and Plato. Both philosophers believed that the universe and everything in it was composed of atoms; but Democritus thought that nothing existed but atoms and the void (space), whereas Plato believed that atoms were arranged by a creator, who, being essentially good, used ideal forms as a blueprint. Contrary to the views of Judaism, Christianity, and Islam, however, Plato believed that the creator was not omnipotent, and was forced to work with imperfect matter to do the best job possible, which is why most created objects and life forms were imperfect and fell short of the ideal forms.

Democritus would no doubt dismiss Plato’s ideal forms as being unreal — after all, forms are not something solid, so how can anything that is not solid, not made of material, exist at all? But as I’ve pointed out, the atoms that compose the human body are found everywhere, whereas actual, living human beings have these same atoms organized in a precise, particular form. In other words, in order to understand anything, it is not enough to break it down into parts and study the parts; one has to look at the whole. The properties and qualities of a living human being, as a whole, definitely do exist, or we would not know how to distinguish a living human being from a dead human being or any other existing thing composed of the same atoms.

The debate between Democritus and Plato points to a difference in ways of knowing that persists to this day: analytic knowledge and holistic knowledge. Analytic knowledge is pursued by science and reason; holistic knowledge is pursued by religion, art, and the humanities. The prestige of science and its technological accomplishments has elevated analytic understanding above all other forms of knowledge, but we remain lost without holistic understanding.

What precisely is “analytic knowledge”? The word “analyze” means “to study or determine the nature and relationship of the parts (of something) by analysis.” Synonyms for “analyze” include “break down,” “cut,” “deconstruct,” and “dissect.” In fact, the word “analysis” comes to English, by way of New Latin, from the Greek analyein, meaning “to break up.” Analysis is an extremely valuable tool and is responsible for human progress in all sorts of areas. But the knowledge derived from analysis is primarily a description and guide to how things work. It reduces knowledge of the whole to knowledge of the parts, which is fine if you want to take something apart and put it back together. But the knowledge of how things work is not the same as the knowledge of what things are as a whole, what qualities and properties they have, and the value of those qualities and properties. This latter knowledge is holistic knowledge.

The word “holism,” based on the ancient Greek word for “whole” (holos), was coined in the early twentieth century in order to promote the view that all systems, living or not, should be viewed as wholes and not just as a collection of parts or the sum of parts. It’s no accident that the words “whole,” “heal,” “healthy,” and “holy” are linguistically related. The problems of sickness, malnutrition, and injury were well-known to the ancients, and it was natural for them to see these problems as a disturbance to the whole human being, rendering a person incomplete and missing certain vital functions. Wholeness was an ideal end, which made wholeness sacred (holy) as well. (For an extended discussion of analytic/reductionist knowledge vs. holistic knowledge, see this post.)

Holistic knowledge is not just about ideal physical health. It’s about ideal forms in all aspects, including the qualities we associate with human beings we admire: wisdom, strength, beauty, courage, love, kindness. As mistaken as religions have been in understanding natural causation, it is the devotion to ideal forms that is really the essence of religion. The ancient Greeks worshipped excellence, as embodied in their gods; Confucians were devoted to family ties and duties; the Jews submitted themselves to the laws of the one God; Christians devoted themselves to the love of God, embodied in Christ.

Holistic knowledge provides no guidance as to how to conduct surgery or build a computer or launch a rocket; but it does provide insight into the ethics of medicine, the desirability or hazards of certain types of technology, and the proper ends of human beings. All too often, contemporary secular societies expect new technologies to improve human lives and pay no heed to ideal human forms, on the assumption that ideal forms are a fantasy. Then we are shocked when the new technologies are abused and not only bring out the worst in human nature but enhance the power of the worst.

Materialism: There’s Nothing Solid About It!

“[I]n truth there are only atoms and the void.” – Democritus

In the ancient Greek transition from mythos to logos, stories about the world and human lives being shaped by gods and goddesses gradually came to be replaced by new explanations from philosophers. Among these philosophers were the “atomists,” including Leucippus and Democritus. Later, the Roman philosopher and poet Lucretius expounded an atomist view of the universe. The atomists were regarded as being among the first atheists and the first materialists — if they did acknowledge the existence of the gods (probably due to public pressure), they argued that the gods had no active influence on the world. Although the atomists’ understanding of the atom was primitive and far from our modern scientific understanding — they did not possess particle accelerators, after all — they were remarkably farsighted about the actual workings of nature. To this day, the symbol of the American Atheists is a depiction of the atom.

However, the ancient atomists’ conception of how the universe is constructed, with solid particles of matter combining to make complex organizational structures, has become problematic given the findings of atomic physics in the past hundred years. Increasingly, scientists have found that reality consists not of solid matter, but of organizational principles and qualities that give us the impression of solidity. And while this new view does not restore the Greek gods to prominence, it does raise questions about how we ought to understand and interpret reality.

_________________________

 

Leucippus and Democritus lived in the fifth century BC. While it is difficult to disentangle their views because of gaps in the historical record, both philosophers argued that all existence was ultimately based on tiny, indestructible particles (“atoms”) and empty space. While not explicitly denying the existence of the gods, the philosophy of Leucippus and Democritus made it clear that the gods had no significant role in the creation or maintenance of the universe. Rather, atoms existed eternally and moved randomly in empty space, until they collided and began to form larger units, leading to the growth of stars and planets and various life forms. The differences between types of matter, such as iron, water, and air, were due to differences in the atoms that composed this matter. Atoms could join with each other because of a variety of hooks or sockets in the atoms that allowed for attachments.

Hundreds of years later, the Roman philosopher Lucretius expanded upon atomist theory in his poem De rerum natura (On the Nature of Things). Lucretius explained that the universe consisted of an infinite number of atoms moving and combining under the influence of laws and random chance, not the decisions of gods. Lucretius also denied the existence of an afterlife, and argued that human beings should not fear death. Although Lucretius was not explicitly atheistic, his work was perceived by Christians in the Middle Ages as being essentially atheistic in outlook and was denounced for that reason.

Not all of the ancient philosophers, even those most committed to reason, accepted the atomist view of existence. It is reported that Plato hated Democritus and wished that his books be burned. Plato did accept that there were different types of matter composing the world, but posited that the particles were perfect triangles, brought together in various combinations. In addition, these triangles were guided by a cosmic intelligence, and were not colliding randomly without purpose. For Plato, the ultimate reality was the Good, and the things we saw all around us were shadows of perfect, ideal forms that were the blueprint for the less-perfect existing things.

For two thousand years after Democritus, atomism as a worldview remained a minority viewpoint — after all, religion was still an important institution in societies, and no one had yet seen or confirmed the existence of atoms. But by the nineteenth century, advances in science had accumulated to the point at which atomism became increasingly popular as a view of reality. No longer was there a need for God or gods to explain nature and existence; atoms and laws were all that were needed. The philosophy of materialism — the view that matter is the fundamental substance in nature and that all things, including mental aspects and consciousness, are results of material interactions — became increasingly prevalent. The political-economic ideology of communism, which at one time ruled one-third of the world’s population, was rooted in materialism. In fact, Karl Marx wrote his doctoral dissertation on Democritus’ philosophy of nature, and Vladimir Lenin authored a philosophical book on materialism, including chapters on physics, that was mandatory reading in the higher education system of the Soviet Union.

As physicists conducted increasingly sophisticated experiments on the smallest parts of nature, however, certain results began to challenge the view that atoms were solid particles of matter. For one thing, it was found that atoms themselves were not solid throughout but consisted of electrons orbiting around an extremely small nucleus of protons and neutrons. The nucleus of an atom is actually 100,000 times smaller than the entire atom, even though the nucleus contains almost the entire mass of the atom. As one article has put it, “if the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium.” For that reason, some have concluded that all “solid” objects in the universe, including human beings, are actually about 99.9999999 percent empty space, because of the empty space in the atoms! Others respond that in fact it is not “empty space” in the atom, but rather a “field” or “wave function” — and here it gets confusing.
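The “empty space” claim above is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming the 1:100,000 nucleus-to-atom diameter ratio quoted above and treating the electrons as point-like (figures in popular accounts vary depending on what is counted as “occupied” volume):

```python
# Rough estimate of how much of an atom's volume the nucleus occupies.
# Assumption: nucleus diameter is ~1/100,000 of the atomic diameter,
# and the electrons are treated as point-like (occupying no volume).
linear_ratio = 1e-5                  # nucleus diameter / atom diameter
volume_fraction = linear_ratio ** 3  # volume scales as the cube of length
empty_fraction = 1 - volume_fraction

print(f"nucleus fills {volume_fraction:.1e} of the atom's volume")
print(f"so the atom is {empty_fraction * 100:.10f}% 'empty'")
```

On this volume-based accounting the fraction comes out even closer to 100 percent than the figure quoted above; the exact number depends entirely on how one models the electron cloud, which is precisely where talk of “empty space” gives way to talk of fields and wave functions.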

In fact, subatomic particles do not have a precise location in space; they behave like a fuzzy wave until they interact with an observer, and then the wave “collapses” into a particle. The bizarreness of this activity confounded the brightest scientists in the world, and to this day, there are arguments among scientists about what is “really” going on at the subatomic level.

The currently dominant interpretation of subatomic physics, known as the “Copenhagen interpretation,” was developed by the physicists Werner Heisenberg and Niels Bohr in the 1920s. Heisenberg subsequently wrote a book, Physics and Philosophy, to explain how atomic physics changed our interpretation of reality. According to Heisenberg, the traditional scientific view of material objects and particles existing objectively, whether we observe them or not, could no longer be upheld. Rather than existing as solid objects, subatomic particles existed as “probability waves” — in Heisenberg’s words, “something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.” (Physics and Philosophy, p. 41 — page numbers are taken from the 1999 edition published by Prometheus Books). According to Heisenberg:

The probability function does . . . not describe a certain event but, at least during the process of observation, a whole ensemble of possible events. The observation itself changes the probability function discontinuously; it selects of all possible events the actual one that has taken place. . . Therefore, the transition from the ‘possible’ to the ‘actual’ takes place during the act of observation. If we want to describe what happens in an atomic event, we have to realize that the word ‘happens’ can apply only to the observation, not to the state of affairs between two observations. It applies to the physical, not the psychical act of observation, and we may say that the transition from the ‘possible’ to the ‘actual’ takes place as soon as the interaction of the object with the measuring device, and thereby with the rest of the world, has come into play. (pp. 54-55)

Later in his book, Heisenberg writes: “If one wants to give an accurate description of the elementary particle — and here the emphasis is on the word ‘accurate’ — the only thing that can be written down as a description is a probability function.” (p. 70) Moreover,

In the experiments about atomic events we have to do with things and facts, with phenomena that are just as real as any phenomena in daily life. But the atoms or the elementary particles themselves are not as real; they form a world of potentialities or possibilities rather than one of things or facts. (p. 186)

This sounds downright crazy to most people. The idea that the solid objects of our everyday experience are made up not of smaller solid parts but of probabilities and potentialities seems bizarre. However, Heisenberg noted that observed events at the subatomic level did seem to fit the interpretation of reality given by the Greek philosopher Aristotle over 2000 years ago. According to Aristotle, reality was a combination of matter and form, but matter was not a set of solid particles but rather potential, an indefinite possibility or power that became real only when it was combined with form to make actual existing things. (pp. 147-49) To provide some rough analogies: a supply of wood can potentially be a table or a chair or a house — but it must be combined with the right form to become actually a table or a chair or a house. Likewise, a block of marble is potentially a statue of a man or a woman or an animal, but only when a sculptor shapes the marble into that particular form does the statue become actual. In other words, actuality (reality) equals potential plus form.

According to Heisenberg, Aristotle’s concept of potential was roughly equivalent to the concept of “energy” in modern physics, and “matter” was energy combined with form.

All the elementary particles are made of the same substance, which we may call energy or universal matter; they are just different forms in which the matter can appear.

If we compare this situation with the Aristotelian concepts of matter and form, we can say that the matter of Aristotle, which is mere ‘potential,’ should be compared to our concept of energy, which gets into ‘actuality’ by means of the form, when the elementary particle is created. (p. 160)

In fact, all modern physicists agree that matter is simply a form of energy (and vice versa). In the earliest stages of the universe, matter emerged out of energy, and that is how we got atoms in the first place. There is nothing inherently “solid” about energy, but energy can be transformed into particles, and particles can be transformed back into energy. According to Heisenberg, “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .” (p. 63)

So what exactly is energy? Oddly enough, physicists have a hard time stating exactly what energy is. Energy is usually defined as the “capacity to do work” or the “capacity to cause movement,” but these definitions remain somewhat vague, and there is no specific mechanism or form that physicists can point to in order to describe energy. Gottfried Leibniz, who developed the first formula for measuring energy, referred to energy as vis viva or “living force,” a concept which is anthropomorphic and nearly theological.  In fact, there are so many different types of energy and so many different ways to measure these types of energy that many physicists are inclined to the view that energy is not a substance but just a mathematical abstraction. According to the great American physicist Richard Feynman, “It is important to realize that in physics today, we have no knowledge of what energy ‘is.’ We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas.” The only reason physicists know that energy exists is that they have performed numerous experiments over the years and have found that however energy is measured, the amount of energy in an isolated system always remains the same — energy can only be transformed, it can neither be created nor destroyed. Energy in itself has no form, and there is no such thing as “pure energy.” Oh, and energy is relative too — you have to specify the frame of reference when measuring energy, because the position and movement of the observer matters. For example, if you move toward a photon, its energy in that frame of reference will be greater; if you move away from a photon, its energy will be less.
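The frame-dependence of a photon’s energy can be made concrete with the relativistic Doppler formula. A minimal sketch, in which the function name and sample values are illustrative rather than taken from any source discussed here:

```python
import math

def photon_energy_shift(E, beta, toward_source=True):
    """Relativistic Doppler shift of a photon's energy.

    E: photon energy in the source's frame.
    beta: observer speed along the line of sight, as a fraction of c.
    Moving toward the source blueshifts the photon (energy increases);
    moving away redshifts it (energy decreases).
    """
    factor = math.sqrt((1 + beta) / (1 - beta))
    return E * factor if toward_source else E / factor

E0 = 2.0  # eV, roughly a visible-light photon
print(photon_energy_shift(E0, 0.5, toward_source=True))   # blueshifted: > 2.0 eV
print(photon_energy_shift(E0, 0.5, toward_source=False))  # redshifted:  < 2.0 eV
```

The same photon thus carries different energies for different observers; neither value is more “real” than the other, which is part of why physicists treat energy as frame-relative bookkeeping rather than a substance.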

Moreover, the melding of relativity theory with quantum physics has further undermined materialism and our common-sense notions of what it is to be “real.” A 2013 article in Scientific American by Dr. Meinard Kuhlmann of Bielefeld University in Germany, “What Is Real?”, lays out some of these paradoxes of existence at the subatomic level. For example, scientists can create a vacuum in the laboratory, but when a Geiger counter is connected to the vacuum container, it will detect matter. In addition, a vacuum will contain no particles according to an observer at rest, but will contain many particles from the perspective of an accelerating observer! Kuhlmann concludes: “If the number of particles is observer-dependent, then it seems incoherent to assume that particles are basic. We can accept many features to be observer-dependent but not the fact of how many basic building blocks there are.”

So, if the smallest parts of reality are not tiny material objects, but potentialities and probabilities, which vary according to the observer, then how do we get what appears to be solid material objects, from rocks to mountains to trees to houses and cars? According to Kuhlmann, some philosophers and scientists say that we need to think about reality as consisting entirely of relations. In this view, subatomic particles have no definite position in space until they are observed because determining position in space requires a relation between an observer and observed. Position is mere potential until there is a relation. You may have heard of the old puzzle, “If a tree falls in a forest, and no one is around to hear it, does it make a sound?” The answer usually given is that sound requires a perceiver who can hear, and it makes no sense to talk about “sound” without an observer with functional ears. In the past, scientists believed that if objects were broken down into their smallest parts, we would discover the foundation of reality; but in the new view, when you break down larger objects into their smallest parts, you are gradually taking apart the relations that compose the object, until what you have left is potential. It is the relations between subatomic particles and observers that give us solidity.

Another interpretation Kuhlmann discusses is that the fundamental basis of reality is bundles of properties. In this view, reality consists not of objects or things, but of properties such as shape, mass, color, position, velocity, spin, etc. We think of things as being fundamentally real and properties as being attributes of things. But in this new view, properties are fundamentally real and “things” are what we get when properties are bundled together in certain ways. For example, we recognize a red rubber ball as being a red rubber ball because our years of experience and learning in our culture have given us the conceptual category of “red rubber ball.” An infant does not have this conceptual category, but merely sees the properties: the roundness of the shape, the color red, the elasticity of the rubber. As the infant grows up, he or she learns that this bundle of properties constitutes the “thing” known as a red rubber ball; but it is the properties that are fundamental, not the thing. So when scientists break down objects into smaller and smaller pieces in their particle accelerators, they are gradually taking apart the bundles of properties until the particles no longer even have a definite position in space!

So whether we think of reality as consisting of relations or of bundles of properties, there is nothing “solid” underlying everything. Reality consists of properties or qualities that emerge out of potential and then bundle together in certain ways. Over time, some bundles or relations come apart, and new bundles or relations emerge. Finally, in the evolution of life, there is an explosion of new bundles of properties, some containing a staggering degree of organizational complexity, built incrementally over millions of years. The proper interpretation of this organizational complexity will be discussed in a subsequent post.

 

How Random is Evolution?

“Man is the product of causes which had no prevision of the end they were achieving . . . his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms. . . .” – Bertrand Russell

In high school or college, you were probably taught that human life evolved from lower life forms, and that evolution was a process in which random mutations in DNA, the genetic code, led to the development of new life forms. Most mutations are harmful to an organism, but some mutations confer an advantage to an organism, and that organism is able to flourish and pass down its genes to subsequent generations – hence, “survival of the fittest.”

Many people reject the theory of evolution because it seemingly removes the role of God in the creation of life and of human beings and suggests that the universe is highly disordered. But all available evidence suggests that life did evolve, that the world and all of its life was not created in six days, as the Bible asserted. Does this mean that human life is an accident, that there is no larger intelligence or purpose to the universe?

I will argue that although evolution does indeed suggest that the traditional Biblical view of life’s origins is incorrect, people have the wrong impression of (1) what randomness in evolution means and (2) how large the role of randomness is in evolution. While it is true that individual micro-events in evolution can be random, these events are part of a larger system, and this system can be highly ordered even if particular micro-events are random. Moreover, recent research in evolution indicates that in addition to undergoing random mutation, organisms can respond to environmental factors by changing in a manner that is purposive, not random, in a direction that increases their ability to thrive.

____________________

So what does it mean to say that something is “random”? According to the Merriam-Webster dictionary, “random” means “a haphazard course,” “lacking a definite plan, purpose, or pattern.” Synonyms for “random” include “aimless,” “arbitrary,” and “slapdash.” It is easy to see why, when people are told that evolutionary change is a random process, many reject the idea outright. This is not necessarily a matter of unthinking religious prejudice. Anyone who has examined nature and the biology of animals and human beings can’t help but be impressed by how enormously complex and precisely ordered these systems are. The fact of the matter is that it is extraordinarily difficult to build and maintain life; death and nonexistence are relatively easy. But what does it mean to lack “a definite plan, purpose, or pattern”? I contend that this definition, insofar as it applies to evolution, refers only to the particular micro-events of evolution considered in isolation, not to the broader outcome or the sum of the events.

Let me illustrate what I mean by presenting an ordinary and well-known case of randomness: rolling a single die. A die is a cube with six sides and a number, 1-6, on each side. The outcome of any single roll is unpredictable, and so is the particular sequence of outcomes over multiple rolls. But if you look at the broader, long-term outcome after 1000 rolls, you will see a pattern: an approximately equal number of ones, twos, threes, fours, fives, and sixes will come up, and the average value of all rolls will be close to 3.5.

Why is this? Because the die itself is a highly precise, ordered system. Each die must have equal side lengths and an even distribution of density and weight throughout in order to make the outcome truly unpredictable; otherwise a gambler who knows the design of the die may have an edge. One die manufacturer brags, “With tolerances less than one-third the thickness of a human hair, nothing is left to chance.” [!] In fact, a common method of cheating with dice is to shave one or more sides or insert a weight into one end of the die. This results in a system that is also precisely ordered, but in a way that makes certain outcomes more likely. After a thousand rolls of the die, one or more outcomes will come up more frequently, and this pattern will stand out suspiciously. But the person who cheated by tilting the odds in one direction may have already escaped with his or her winnings.
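This long-run pattern is easy to check numerically. Below is a minimal Python sketch of my own (the function name and the weights chosen for the loaded die are illustrative, not from any source): a fair die averages close to 3.5 over many rolls, while a die weighted toward six drifts visibly higher.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def average_roll(weights, n=100_000):
    """Simulate n rolls of a six-sided die whose faces come up in
    proportion to the given weights, and return the average outcome."""
    faces = [1, 2, 3, 4, 5, 6]
    rolls = random.choices(faces, weights=weights, k=n)
    return sum(rolls) / n

fair = average_roll([1, 1, 1, 1, 1, 1])    # a precisely balanced die
loaded = average_roll([1, 1, 1, 1, 1, 3])  # weighted toward six

print(round(fair, 2))    # close to the expected value of 3.5
print(round(loaded, 2))  # noticeably higher; the bias stands out
```

Each individual roll is unpredictable in both cases, yet the averages betray the underlying order of the system: the fair die's precision produces 3.5, and the shaved die's precision produces a suspiciously different number.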

If you look at how casinos make money, it is precisely by structuring the rules of each game to give the edge to the casino that allows them to make a profit in the long run. The precise outcome of each particular game is not known with certainty, the particular sequence of outcomes is not known, and the balance sheet of the casino at the end of the night cannot be predicted. But there is definitely a pattern: in the long run, the sum of events results in the casino winning and making a profit, while the players as a group will lose money. When casinos go out of business, it is generally because they can’t attract enough customers, not because they lose too many games.
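The casino's long-run edge can be illustrated with a short simulation. The sketch below is a hypothetical example of my own, modeling an even-money bet with the odds of American roulette (the player wins 18 times out of 38): any single bet is unpredictable, but the sum of many bets reliably favors the house.

```python
import random

random.seed(7)  # fixed seed so the simulation is repeatable

# A simplified even-money wager with the odds of American roulette:
# the player wins 18 times out of 38, so the house keeps a small edge.
WIN_PROB = 18 / 38

def casino_profit(n_bets=100_000, stake=1):
    """Simulate n_bets one-unit even-money bets and return the casino's
    total profit (a negative value would mean the casino lost money)."""
    profit = 0
    for _ in range(n_bets):
        if random.random() < WIN_PROB:
            profit -= stake  # the player wins; the casino pays out
        else:
            profit += stake  # the player loses; the casino collects
    return profit

profit = casino_profit()
print(profit)  # positive: the 2/38 house edge dominates in the long run
```

No one can say in advance which bets the casino will win, but over a hundred thousand bets the profit converges toward the expected value of n_bets × 2/38, which is why casinos fail by losing customers rather than by losing games.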

The ability to calculate the sum of a sequence of random events is the basis of the so-called “Monte Carlo” method in mathematics. Basically, the Monte Carlo method involves setting certain parameters, selecting random inputs until the number of inputs is quite large, and then calculating the final result. It’s like throwing darts at a dartboard repeatedly and examining the pattern of holes. One can use this method with 30,000 randomly plotted points to calculate the value of pi to within 0.07 percent.
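The dartboard version of this method can be sketched in a few lines of Python (the function name is my own): darts thrown at random into a unit square land inside the inscribed quarter circle with probability pi/4, so counting the hits approximates pi.

```python
import random

random.seed(0)  # fixed seed so the estimate is repeatable

def estimate_pi(n_points=30_000):
    """Throw n_points random darts at the unit square; the fraction that
    lands inside the inscribed quarter circle approximates pi/4."""
    hits = 0
    for _ in range(n_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4 * hits / n_points

estimate = estimate_pi()
print(estimate)  # typically close to 3.14159
```

Every individual dart is random, yet the aggregate of 30,000 darts reveals a precise mathematical constant: the order lies in the parameters of the system, not in any single event.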

So if randomness can exist within a highly precise order, what is the larger order within which the random mutations of evolution operate? One aspect of this order is the bonding preferences of atoms, which are responsible not only for shaping how organisms arise, but how organisms eventually develop into astonishingly complex and wondrous forms. Without atomic bonds, structures would fall apart as quickly as they came together, preventing any evolutionary advances. The bonding preferences of atoms shape the parameters of development and result in molecular structures (DNA, RNA, and proteins) that retain a memory or blueprint, so that evolutionary change is incremental. The incremental development of organisms allows for the growth of biological forms that are eventually capable of running at great speeds, flying long distances, swimming underwater, forming societies, using tools, and, in the case of humans, building technical devices of enormous sophistication.

The fact of incremental change that builds upon previous advances is a feature of evolution that makes it more than a random process. This is illustrated by biologist Richard Dawkins’ “weasel program,” a computer simulation of how evolution works by combining random micro-events with the retaining of previous structures so that over time a highly sophisticated order can develop. The weasel program is based on the “infinite monkey theorem,” the fanciful proposal that an infinite number of monkeys with an infinite number of typewriters would eventually produce the works of Shakespeare. This theorem has been used to illustrate how order could conceivably emerge from random and mindless processes. What Dawkins did, however, was write a computer program to write just one sentence from Shakespeare’s Hamlet: “Methinks it is like a weasel.” Dawkins structured the computer program to begin with a single random sentence, reproduce this sentence repeatedly, but add random errors (“mutations”) in each “generation.” If the new sentence was at least somewhat closer to the target phrase “Methinks it is like a weasel,” that sentence became the new parent sentence. In this way, subsequent generations would gradually assume the form of the correct sentence. For example:

Generation 01: WDLTMNLT DTJBKWIRZREZLMQCO P
Generation 02: WDLTMNLT DTJBSWIRZREZLMQCO P
Generation 10: MDLDMNLS ITJISWHRZREZ MECS P
Generation 20: MELDINLS IT ISWPRKE Z WECSEL
Generation 30: METHINGS IT ISWLIKE B WECSEL
Generation 40: METHINKS IT IS LIKE I WEASEL
Generation 43: METHINKS IT IS LIKE A WEASEL

The Weasel program is a great example of how random change can produce order over time, BUT only under highly structured conditions, with a defined goal and a retaining of those steps toward that goal. Without these conditions, a computer program randomly selecting letters would be unlikely to produce the phrase “Methinks it is like a weasel” in the lifetime of the universe, according to Dawkins!
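Dawkins did not publish his original code, so the sketch below is a reconstruction in Python of the algorithm as described above; the mutation rate and the number of offspring per generation are illustrative choices. Note how the program retains the best candidate from each generation: exactly the "retaining of steps toward the goal" that makes the process more than random.

```python
import random
import string

random.seed(1)  # fixed seed so the run is repeatable

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    """Count how many characters match the target phrase."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(mutation_rate=0.05, offspring=100):
    """Start from a random string; each generation, copy the parent with
    random errors ("mutations") and keep the copy closest to the target.
    Returns the number of generations needed to reach the target."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        generations += 1
        children = [
            "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                    for c in parent)
            for _ in range(offspring)
        ]
        # Retain the best candidate seen so far (including the parent itself),
        # so progress toward the target is never lost.
        parent = max(children + [parent], key=score)
    return generations

generations = evolve()
print(generations)  # reaches the target phrase in a modest number of generations
```

Every mutation in this program is random, yet the phrase emerges quickly because the system preserves incremental gains; delete the selection step and the same program would wander aimlessly for longer than the lifetime of the universe.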

It is the retaining of most evolutionary advances, while allowing a small degree of randomness, that enables evolution to produce increasingly complex life forms. Reproduction has some random elements in it, but it is actually remarkably precise and effective in producing offspring at least roughly similar to their parents. A female human is not just as likely to give birth to a dog, a pig, or a chicken as to another human. It would be very strange indeed if evolution were that random!

But there is even more to the story of evolution.

Recent research in biology has indicated that there are factors in nature that tend to push development in certain directions favorable to an organism’s flourishing. Even if you imagine evolution in nature as a huge casino, with a lot of random events, scientists have discovered that the players are strategizing: they are increasing or decreasing their level of gambling in response to environmental conditions, shaving the dice to obtain more favorable outcomes, and cooperating with each other to cheat the casino!

For example, it is now recognized among biologists that a number of microorganisms are capable to some extent of controlling their rate of mutation, increasing the rate of mutation during times of environmental challenge and stress, and suppressing it during times of peace and abundance. As a result of accelerated mutations, certain bacteria can acquire the ability to utilize new sources of nutrition, overcoming the threat of extinction arising from the depletion of their original food source. In other words, in response to feedback from the environment, organisms can decide to preserve as much of their genome as they can or to experiment wildly in the hope of finding a solution to new environmental challenges.

The organism known as the octopus (a cephalopod) has a different strategy: it actively suppresses mutation in DNA and prefers to recode its RNA in response to environmental challenges. For example, octopi in the icy waters of the Antarctic recode their RNA in order to keep their nerves firing in cold water. This response is not random but directly adaptive. RNA recoding in octopi and other cephalopods is particularly prevalent in proteins responsible for the nervous system, and it is believed by scientists that this may explain why octopi are among the most intelligent creatures on Earth.

The cephalopods are somewhat unusual creatures, but there is evidence that other organisms can also adapt in a nonrandom fashion to their environment by employing molecular factors that suppress or activate the expression of certain genes — the study of these molecular factors is known as “epigenetics.” For example, every cell in a human fetus has the same DNA, but this DNA can develop into heart tissue, brain tissue, skin, liver, etc., depending on which genes are expressed and which genes are suppressed. The molecular factors responsible for gene expression are largely proteins, and these epigenetic factors can result in heritable changes in response to environmental conditions that are definitely not random.

The water flea, for example, can develop in different variations, despite the same DNA, in response to the environmental conditions experienced by the mother flea. If the mother flea experienced a serious predator threat, her offspring develop a spiny helmet for protection; otherwise the offspring develop normal, helmet-less heads. Studies have found that in other creatures, a particular diet can turn certain genes on or off, modifying offspring without changing DNA. In one study, mice that exercised not only enhanced their own brain function but passed enhanced brain function to their children, though the effect lasted only one generation if exercise stopped. The Mexican cave fish once had eyes, but in its new dark environment, epigenetics has been responsible for turning off the genes responsible for eye development; its original DNA remains unchanged. (The hypothesized reason for this is that organisms tend to discard traits that are not needed in order to conserve energy.)

Recent studies of human beings have uncovered epigenetic adaptations that have allowed humans to flourish in such varied environments as deserts, jungles, and polar ice. The Oromo people of Ethiopia, recent settlers to the highlands of that country, have had epigenetic changes to their immune system to cope with new microbiological threats. Other populations in Africa have genetic mutations that have the twin effect of protecting against malaria but causing sickle cell anemia — recently it has been found that these mutations are being silenced in the face of declining malarial threats.  Increasingly, scientists are recognizing the large role of epigenetics in the evolution of human beings:

By encouraging the variations and adaptability of our species, epigenetic mechanisms for controlling gene expression have ensured that humanity could survive and thrive in any number of environments. Epigenetics is a significant part of the reason our species has become so adaptable, a trait that is often thought to distinguish us from what we often think of as lesser-evolved and developed animals that we inhabit this earth with. Indeed, it can be argued that epigenetics is responsible for, and provided our species with, the tools that truly made us unique in our ability to conquer any habitat and adapt to almost any climate. (Bioscience Horizons, 1 January 2017)

In fact, despite the hopes of scientists everywhere that the DNA sequencing of the human genome would provide a comprehensive biological explanation of human traits, it has been found that epigenetics may play a larger role in the complexity of human beings than the number of genes. According to one researcher, “[W]e found out that the human genome is probably not as complex and doesn’t have as many genes as plants do. So that, then, made us really question, ‘Well, if the genome has less genes in this species versus this species, and we’re more complex potentially, what’s going on here?'”

One additional nonrandom factor in evolution should be noted: the role of cooperation between organisms, which may even lead to biological mergers that create a new organism. Traditionally, evolution has been thought of primarily as random changes in organisms followed by a struggle for existence between competing organisms. It is a dark view of life. But increasingly, biologists have discovered that cooperation between organisms, known as symbiosis, also plays a role in the evolution of life, including the evolution of human beings.

Why was the role of cooperation in evolution overlooked until relatively recently? A number of biologists have argued that the society and culture of Darwin’s time played a significant role in shaping his theory — in particular, Adam Smith’s book The Wealth of Nations. In Smith’s view, the basic unit of economics was the self-interested individual on the marketplace, who bought and sold goods without any central planner overseeing his activities. Darwin essentially adopted this view and applied it to biological organisms: as businesses competed on the marketplace and flourished or died depending on how efficient they were, so too did organisms struggle against each other, with only the fittest surviving.

However, even in the late nineteenth century, a number of biologists noted cases in nature in which cooperation played a prominent role in evolution. In the 1880s, the Scottish biologist Patrick Geddes proposed that the reason the giant green anemone contained algal (algae) cells as well as animal cells was the evolution of a cooperative relationship between the two types of cells, a relationship that resulted in a merger in which the algal cells were incorporated into the animal flesh of the anemone. In the latter part of the twentieth century, biologist Lynn Margulis carried this concept further. Margulis argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, and created a more advanced life form. This theory was regarded with much skepticism at the time it was proposed, but over time it became widely accepted. The traditional picture of evolution as one in which new species diverge from older species and compete for survival has had to be supplemented with the picture of cooperative behavior and mergers. As one researcher has argued, “The classic image of evolution, the tree of life, almost always exclusively shows diverging branches; however, a banyan tree, with diverging and converging branches is best.”

More recent studies have demonstrated the remarkable level of cooperation between organisms that is the basis for human life. One study from a biologist at the University of Cambridge has proposed that human beings have as many as 145 genes that have been borrowed from bacteria, other single-celled organisms, and viruses. In addition, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes! If, as one biologist has argued, each human being is a “society of cells,” it would be equally valid to describe a human being as a “society of cells and microbes.”

Is there randomness in evolution? Certainly. But the randomness is limited in scope; it takes place within a larger order which preserves incremental gains, and it provides the experimentation and diversity organisms need to meet new challenges and new environments. Alongside this randomness are epigenetic adaptations that turn genes on or off in response to environmental influences, and the cooperative relations of symbiosis, which can build larger and more complex organisms. These additional facts do not prove the existence of a creator-God who oversees all of creation down to the most minute detail; but they do suggest a purposive order within which an astonishing variety of life forms can emerge and grow.

 

Review of “Modern Physics and Ancient Faith,” by Stephen Barr

I recently came across the book Modern Physics and Ancient Faith by Stephen Barr, a professor of physics at the University of Delaware. First published in 2003, the book was reprinted in 2013 by the University of Notre Dame Press. I was initially skeptical of this book because reviews and praise for the book seemed to be limited mostly to religious and conservative publications. But after reading it, I have to say that the science in the book is solid, the author is reasonably open-minded, and the paucity of reviews of this book in scientific publications more likely reflects an unthinking prejudice than an informed judgment by scientists.

Modern Physics and Ancient Faith is not one of those books that attempts to prove the existence of God scientifically, an impossible task in any event. The book does show that belief in God is reasonable, that faith is not wholly irrational. But the main theme of the book is a critique of “scientific materialism,” a philosophy that emerged out of the scientific findings of the nineteenth century. What is “scientific materialism”? Barr summarizes the viewpoint as follows:

‘The universe more and more appears to be a vast, cold, blind, and purposeless machine. For a while it appeared that some things might escape the iron grip of science and its laws — perhaps Life or Mind. But the processes of life are now known to be just chemical reactions, involving the same elements and the same basic physical laws that govern the behavior of all matter. The mind itself is, according to the overwhelming consensus of cognitive scientists, completely explicable as the performance of the biochemical computer called the brain. There is nothing in principle that a mind does which an artificial machine could not do just as well or even better. . . .

‘There is no evidence of a spiritual realm, or that God or souls are real. In fact, even if there did exist anything of a spiritual nature, it could have no influence on the visible world, because the material world is a closed system of physical cause and effect. Nothing external to it could affect its operations without violating the precise mathematical relationships imposed by the laws of physics. . . .

‘All, therefore, is matter: atoms in ceaseless, aimless motion. In the words of Democritus, everything consists of ‘atoms and the void.’ Because the ultimate reality is matter, there cannot be any cosmic purpose or meaning, for atoms have no purposes or goals. . . .

‘Science has dethroned man. Far from being the center of things, he is now seen to be a very peripheral figure indeed. . . . The human species is just one branch on an ancient evolutionary tree, and not so very different from some of the other branches — genetically we overlap more than 98 percent with chimpanzees. We are the product not of purpose, but of chance mutations. Bertrand Russell perfectly summed up man’s place in the cosmos when he called him ‘a curious accident in a backwater.’” (pp. 19-20)

A great many educated people subscribe to many or most of these principles of scientific materialism. However, as Barr notes, these principles are based largely on scientific findings from the nineteenth century, and there have been a number of major advances in knowledge since then that have cast doubt on the principles of materialism. Barr refers to these newer findings as “plot twists.”

The first plot twist Barr notes is the “Big Bang” theory of the origins of the universe, first proposed in 1927 by Georges Lemaître, a Catholic priest and astronomer. Although taken for granted today, Lemaître’s idea of a universe emerging from a single, tiny point of concentrated energy was initially shocking and disturbing to many scientists. The predominant view of scientists for a number of centuries was that the universe existed eternally, with no beginning and no end. Even Einstein was disturbed by the Big Bang theory and thought it “abominable”; it took many years for him to accept it. I initially was skeptical of Barr’s claim that atheists and materialists were particularly disturbed by and opposed to the Big Bang, but a recent book on theories of the universe’s origins supports Barr’s claim (pp. 25-27).

Now it’s true that the Big Bang in itself doesn’t prove the existence of a creator. And there are many features of the universe which seem to argue against the idea of an omniscient and omnipotent creator, most prominently, the very gradual process of evolution, with its randomness and mass extinctions. Also, Georges Lemaître himself cautioned against any attempts to draw theological conclusions from his scientific work, and argued against the mixing of science and religion. Nevertheless, I don’t think it can be denied that the Big Bang is more compatible with the idea of a creator than the previously dominant theory of the eternal universe.

The second plot twist, according to Barr, is the gradual discovery of a deeper underlying harmony and beauty in the laws of physics, which suggest not blind and impersonal forces but a cosmic designer. Any one particular phenomenon can be explained by an impersonal law or mechanism, notes Barr; but when one looks at the structure of the universe as a whole, the question of a designer is “inescapable.” (Barr, p. 24) In addition, the story of the universe is not one of order emerging out of disorder or chaos, but rather order emerging out of a deeper order, rooted in mathematical structures and physical laws. Barr discusses a number of examples of this harmony and beauty, including symmetrical structures in nature, the growth of crystals, orbital ellipses, and particular mathematical equations that are simple yet also capable of resulting in highly complex orders.

I found this part of Barr’s book to be the least convincing. Symmetry in itself does not strike me as particularly beautiful, though beautiful things may have symmetry as one of their properties. Furthermore, there are aspects of the universe that are definitely not beautiful, harmonious, or elegant, but rather messy, complicated, and wasteful. Elegance is rightly valued by scientists, but there is no reason to believe that the underlying structure of nature is always fundamentally elegant, and scientists have sometimes been misled into proposing elegant explanations for phenomena that later turned out to be far messier and more complicated in reality.

The third “plot twist” Barr discusses is, in my view, far more interesting and convincing: the discovery of “anthropic coincidences,” that is, features of the universe that suggest that the emergence of life, including intelligent life, is not an accident but is actually a predictable feature of the universe, built right into the structure. Barr accepts the theory of evolution and acknowledges that random mutations play a large role in the evolution of life (though he is skeptical that natural selection provides a complete explanation for the evolution of life). But Barr also argues that evolution proceeds from the foundation of a cosmic order which seems to be custom-made for the emergence of life. A good number of physical properties of the universe — the strong nuclear force, the creation of new elements through nuclear fusion, the stability of the proton, the strength of the electromagnetic force, and other properties and processes — seem to be finely tuned to within a very narrow range of precision. Behavior outside these strict boundaries set by the physical laws and constants of the universe would eliminate the possibility of life anywhere in the universe or even cause the universe to self-destruct. Again, these “anthropic coincidences” do not prove the existence of God, but they do seem to indicate that life is not just an accident of the universe, but an outcome built into the universe from the very beginning.

Barr acknowledges the argument of some physicists that the universe could well have different domains with different physical laws, or there could even be a large (or infinite) number of universes (the “multiverse”), each with a slightly different set of laws and constants. In this view, life only seems inevitable because we just happen to exist in a universe that has the right balance of laws and constants, and if the universe did not have this balance, no one would be there to observe that fact! But Barr is rightly skeptical of this argument, noting that it relies merely upon speculation, with no actual empirical evidence of other universes. If belief in God is unscientific because God is unobservable, then belief in an unobservable multiverse is also unscientific.

Barr devotes most of the remaining chapters of his book to refuting the scientific materialist view of humanity. In the materialist view, human beings are nothing more than biological mechanisms, made up of the same atoms that make up the rest of the universe, and therefore as determined and predictable as any other object in the universe. Humans have no soul apart from the body, no mind apart from the brain, and no real free will. Therefore, there is no reason to expect that artificial intelligence in advanced computers will be any different from human minds, aside from being superior. Barr rightly criticizes this view as the fallacy of reductionism, the notion that everything can be explained by reference to the parts that compose it. The problem with this view, as I have pointed out elsewhere, is that when parts join together, the resulting whole may be an entirely new phenomenon, with radically different properties from the parts.

Barr argues that although human beings may be made of materials, human beings also have a spiritual side, defined as the possession of two crucial attributes: (1) an intellect capable of understanding; and (2) free will. As such, human beings are capable of transcending themselves: they can go beyond their immediate desires and appetites, “perceive what is objectively true and beautiful” (p. 168), and freely choose good or evil.

It is precisely this quality of human beings that makes humans different from material objects, including computers. Barr points out that a computer can manipulate numbers and symbols, and do so much more quickly and efficiently than humans. But, he asks, in what way does a computer actually understand what it is doing? When you use a computer to calculate the sum of all annual profits for Corporation X, does the computer have any idea of what it is really doing? No — it is manipulating numbers and symbols, for the benefit of human beings who actually do have understanding. Likewise, the very notion of moral judgment and blame makes no sense when applied to a computer, because in practice we know that a computer does not have real understanding. In Barr’s words:

We do not really ‘blame’ a computer program for what it does; if anything, we blame its human programmers. We do not condemn a man-eating tiger, in the moral sense, or grow indignant at the typhoid bacillus. And yet we do feel that human beings can ‘deserve’ and that their behavior can be morally judged. We believe this precisely because we believe that human beings can make free choices. (p. 186)

More importantly, as Barr notes, the scientific materialist view of human beings as nothing more than machines is derived from scientific findings of earlier centuries, when scientists became increasingly capable of predicting the motions and actions of objects, large and small. Since human beings were nothing more than collections of material objects known as atoms, it stood to reason that human actions were likewise predictable and determined. But early twentieth century research into the behavior of subatomic particles, known as “quantum physics,” overturned the view that the behavior of all objects could be predicted on the basis of predetermined laws. Rather, at the subatomic level, the behavior of objects was probabilistic, not determined; furthermore, the behavior of the particles could not be known until they interacted with an observer!

Today, it is widely acknowledged among scientists that the nineteenth century dream of a completely determined and predictable universe is an illusion: the behavior of large solar, planetary, and sub-planetary bodies can be predicted to a large extent, but other phenomena remain less amenable to such prediction. Yet the materialist view of human beings as completely determined remains popular among many, including scientists who should know better. Barr rightly criticizes this view, noting that while quantum physics doesn’t prove that humans have free will, it does demolish the notion of complete determinism, and at the very least creates space for free will.

However, while Barr effectively demolishes the scientific materialist view of human nature, I don’t think he demonstrates that the traditional Judeo-Christian view is entirely correct either. In this view, human beings have a soul that exists, or can exist, separately from the body, and this soul is imparted to human beings by God in a special act of creation. (p. 225) But rather than surmise that there is a separate spirit that possesses the material body and gives it life and understanding and free will, I think it makes more sense to adopt the outlook of theorists of emergence, that the correct combination and organization of parts can lead to the emergence of a whole organism that possesses life and understanding and free will, even if the parts themselves do not possess these qualities.

Another way to look at this issue is to carefully reexamine the whole notion of what “matter” is. Materialists conceive of human beings as collections of objects called atoms and cannot see how such a collection could possibly have understanding and free will. But human beings are not just made up of objects — we are beings of matter and energy. Physicists define “energy” as the capacity to do work, and if you think of human beings as combinations of matter and energy, the attributes of life and understanding and free will no longer seem so mysterious: these attributes are synergistic expressions of a highly complex matter/energy combination.

One could go even further. Physicists have long noted that matter and energy are interchangeable, that matter can be transformed into energy and energy can be transformed into matter. According to the great physicist Werner Heisenberg, “[E]xperiments have shown the complete mutability of matter. All the elementary particles can, at sufficiently high energies, be transmuted into other particles, or they can simply be created from kinetic energy and can be annihilated into energy, for instance into radiation.” (Physics and Philosophy, p. 139.) In fact, the universe in its earliest stages began simply as energy and only gradually transformed some of that energy into matter, so even matter itself can be considered a form of condensed energy rather than a separate and unique entity. So you could think of humans as beings of energy, not just collections of objects — in which case consciousness and free will no longer seem so strange.

Overall, I think Barr’s critique of scientific materialism is largely convincing. He does not provide scientific evidence for the existence of God, but he does make belief in God reasonable, as long as it is not a fundamentalist God. Indeed, any respectable book on science and religion is going to have to reject the notion that the Bible is literally true in all respects if it is going to be properly scientific. I think Barr does occasionally cherry-pick evidence from religious thinkers and traditions, making it look as if early Jewish and Christian thinkers were more farsighted in their understanding of the universe than they actually were. But ultimately, judging a religion by its understanding of natural causation is a risky undertaking; when new discoveries overturn the old claims, what does one do? It would be absurd to deny the new discoveries in order to save the religion.

Religious knowledge should be considered primarily a form of transcendent knowledge about the Good, not empirical knowledge about what and how the world is. The Catholic priest-astronomer Georges Lemaître was correct in rejecting attempts by others to use his Big Bang theory as evidence for the merits of Christianity. The best test of a religion is not whether it explains nature, but whether it actually makes human beings and human civilization better.

Zen and the Art of Science: A Tribute to Robert Pirsig

Author Robert Pirsig, widely acclaimed for his bestselling books, Zen and the Art of Motorcycle Maintenance (1974) and Lila (1991), passed away in his home on April 24, 2017. A well-rounded intellectual equally at home in the sciences and the humanities, Pirsig made the case that scientific inquiry, art, and religious experience were all particular forms of knowledge arising out of a broader form of knowledge about the Good or what Pirsig called “Quality.” Yet, although Pirsig’s books were bestsellers, contemporary debates about science and religion are oddly neglectful of Pirsig’s work. So what did Pirsig claim about the common roots of human knowledge, and how do his arguments provide a basis for reconciling science and religion?

Pirsig gradually developed his philosophy as a response to a crisis in the foundations of scientific knowledge, a crisis he first encountered while pursuing studies in biochemistry. The popular consensus at the time was that scientific methods promised objectivity and certainty in human knowledge. One developed hypotheses, conducted observations and experiments, and came to a conclusion based on objective data. That was how scientific knowledge accumulated.

However, Pirsig noted that, contrary to his own expectations, the number of hypotheses could easily grow faster than experiments could test them. One could not just come up with hypotheses – one had to make good hypotheses, ones that could eliminate the need for endless and unnecessary observations and testing. Good hypotheses required mental inspiration and intuition, components that were mysterious and unpredictable.  The greatest scientists were precisely like the greatest artists, capable of making immense creative leaps before the process of testing even began.  Without those creative leaps, science would remain on a never-ending treadmill of hypothesis development – this was the “infinity of hypotheses” problem.  And yet, the notion that science depended on intuition and artistic leaps ran counter to the established view that the scientific method required nothing more than reason and the observation and recording of an objective reality.

Consider Einstein. One of history’s greatest scientists, Einstein hardly ever conducted actual experiments. Rather, he frequently engaged in “thought experiments,” imagining what it would be like to chase a beam of light, what it would feel like to be in a falling elevator, and what a clock would look like if the streetcar he was riding raced away from the clock at the speed of light.

One of the most fruitful sources of hypotheses in science is mathematics, a discipline which consists of the creation of symbolic models of quantitative relationships. And yet, the nature of mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. The Indian mathematical genius Srinivasa Ramanujan was a Hindu mystic who believed that solutions were revealed to him in dreams by the goddess Namagiri.

Intuition and inspiration were human solutions to the infinity-of-hypotheses problem. But Pirsig noted there was a related problem that had to be solved — the infinity of facts.  Science depended on observation, but the issue of which facts to observe was neither obvious nor purely objective.  Scientists had to make value judgments as to which facts were worth close observation and which facts could be safely overlooked, at least for the moment.  This process often depended heavily on an imprecise sense or feeling, and sometimes mere accident brought certain facts to scientists’ attention. What values guided the search for facts? Pirsig cited Poincare’s work The Foundations of Science. According to Poincare, general facts were more important than particular facts, because one could explain more by focusing on the general than the specific. Desire for simplicity was next – by beginning with simple facts, one could begin the process of accumulating knowledge about nature without getting bogged down in complexity at the outset. Finally, interesting facts that provided new findings were more important than facts that were unimportant or trivial. The point was not to gather as many facts as possible but to condense as much experience as possible into a small volume of interesting findings.

Research on the human brain supports the idea that the ability to value is essential to the discernment of facts.  Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competency after damage took place to the emotional center of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making in the face of infinite facts became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51) Damasio discusses several other similar case studies.

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion, and therefore, hypothetically, capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools needed for effective decision-making in everyday life are needed for the scientific enterprise as well. A value-free scientist would not only be unable to sustain the social interaction that science requires; he or she would be unable to develop a research plan, manage his or her time, or stick to the plan.

_________

Where Pirsig’s philosophy becomes particularly controversial and difficult to understand is in his approach to the truth. The dominant view of truth today is known as the “correspondence” theory of truth – that is, any human statement that is true must correspond precisely to something objectively real. In this view, the laws of physics and chemistry are real because they correspond to actual events that can be observed and demonstrated. Pirsig argues on the contrary that in order to understand reality, human beings must invent symbolic and conceptual models, that there is a large creative component to these models (it is not just a matter of pure correspondence to reality), and that multiple such models can explain the same reality even if they are based on wholly different principles. Math, logic, and even the laws of physics are not “out there” waiting to be discovered – they exist in the mind, which doesn’t mean that these things are bad or wrong or unreal.

There are several reasons why our symbolic and conceptual models don’t correspond literally to reality, according to Pirsig. First, there is always going to be a gap between reality and the concepts we use to describe reality, because reality is continuous and flowing, while concepts are discrete and static. The creation of concepts necessarily calls for cutting reality into pieces, but there is no one right way to divide reality, and something is always lost when this is done. In fact, Pirsig noted, our very notions of subjectivity and objectivity, the former allegedly representing personal whims and the latter representing truth, rested upon an artificial division of reality into subjects and objects; in fact, there were other ways of dividing reality that could be just as legitimate or useful. In addition, concepts are necessarily static – they can’t be always changing or we would not be able to make sense of them. Reality, however, is always changing. Finally, describing reality is not always a matter of using direct and literal language but may require analogy and imaginative figures of speech.

Because of these difficulties in expressing reality directly, a variety of symbolic and conceptual models, based on widely varying principles, are not only possible but necessary – necessary for science as well as other forms of knowledge. Pirsig points to the example of the crisis that occurred in mathematics in the nineteenth century. For many centuries, it was widely believed that geometry, as developed by the ancient Greek mathematician Euclid, was the most exact of all of the sciences.  Based on a small number of axioms from which one could deduce multiple propositions, Euclidean geometry represented a nearly perfect system of logic.  However, while most of Euclid’s axioms were seemingly indisputable, mathematicians had long experienced great difficulty in satisfactorily demonstrating the truth of one of the chief axioms on which Euclidean geometry was based (the parallel postulate). This slight uncertainty led to an even greater crisis of uncertainty when mathematicians discovered that they could reverse or negate this axiom and create alternative systems of geometry that were every bit as logical and valid as Euclidean geometry.  The science of geometry was gradually replaced by the study of multiple geometries. Pirsig cited Poincare, who pointed out that the principles of geometry were not eternal truths but definitions and that the test of a system of geometry was not whether it was true but how useful it was.
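A concrete way to see that an alternative geometry is “every bit as logical”: on the surface of a sphere (a model of non-Euclidean, elliptic geometry), a triangle’s angles sum to more than 180 degrees, and the excess equals the triangle’s area on a unit sphere (Girard’s theorem). The following short Python sketch, which is my illustration rather than an example from Pirsig, checks the classic case of a triangle with three right angles:

```python
import math

# On a unit sphere, a triangle's area equals its "spherical excess":
# area = (sum of angles) - pi   (Girard's theorem).
# Example: a triangle with one vertex at the north pole and two vertices
# on the equator, 90 degrees of longitude apart, has three right angles.

angles = [math.pi / 2] * 3            # three 90-degree angles
angle_sum_deg = math.degrees(sum(angles))
excess = sum(angles) - math.pi        # departure from Euclid's 180 degrees
area = excess                         # Girard's theorem, unit sphere

print(round(angle_sum_deg, 6))        # 270.0 -- not Euclid's 180
print(area / (4 * math.pi))           # 0.125 -- exactly 1/8 of the sphere
```

Within spherical geometry this result is perfectly consistent; it only contradicts Euclid’s parallel postulate, not logic itself.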

So how do we judge the usefulness or goodness of our symbolic and conceptual models? Traditionally, we have been told that pure objectivity is the only solution to the chaos of relativism, in which nothing is absolutely true. But Pirsig pointed out that this hasn’t really been how science has worked. Rather, models are constructed according to the often competing values of simplicity and generalizability, as well as accuracy. Theories aren’t just about matching concepts to facts; scientists are guided by a sense of the Good (Quality) to encapsulate as much of the most important knowledge as possible into a small package. But because there is no one right way to do this, rather than converging to one true symbolic and conceptual model, science has instead developed a multiplicity of models. This has not been a problem for science, because if a particular model is useful for addressing a particular problem, that is considered good enough.

The crisis in the foundations of mathematics created by the discovery of non-Euclidean geometries and other factors (such as the paradoxes inherent in set theory) has never really been resolved. Mathematics is no longer the source of absolute and certain truth, and in fact, it never really was. That doesn’t mean that mathematics isn’t useful – it certainly is enormously useful and helps us make true statements about the world. It’s just that there’s no single perfect and true system of mathematics. Mathematical axioms, once believed to be certain truths and the foundation of all proofs, are now considered definitions, assumptions, or hypotheses. And a substantial number of mathematicians now declare outright that mathematical objects are imaginary, that particular mathematical formulas may be used to model real events and relationships, but that mathematics itself has no existence outside the human mind. (See The Mathematical Experience by Philip J. Davis and Reuben Hersh.)

Even some basic rules of logic accepted for thousands of years have come under challenge in the past hundred years, not because they are absolutely wrong, but because they are inadequate in many cases, and a different set of rules is needed. The Law of the Excluded Middle states that any proposition must be either true or false (“P” or “not P” in symbolic logic). But ever since mathematicians discovered propositions that may be true yet cannot be proved, some systems of logic have added a third category of “possible/unknown.” Other systems of logic use the idea of multiple degrees of truth, or even an infinite continuum of truth, from absolutely false to absolutely true.
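To make the three-valued idea concrete, here is a brief Python sketch of Kleene’s well-known “strong” three-valued logic, with `None` standing in for the “unknown” value. This is my own illustration, not an example drawn from Pirsig:

```python
# Kleene's strong three-valued logic: True, False, and None ("unknown").
# Illustrative sketch; None represents a proposition of unknown truth value.

def k_not(a):
    # The negation of "unknown" is still unknown.
    return None if a is None else (not a)

def k_and(a, b):
    # False dominates: False AND anything is False, even "unknown".
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # True dominates: True OR anything is True.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

# The Law of the Excluded Middle fails here: P OR (NOT P) comes out
# "unknown" when P itself is unknown, rather than automatically true.
print(k_or(None, k_not(None)))  # None
```

Nothing about classical logic is “refuted” by this; the two systems simply serve different purposes, which is exactly Pirsig’s point about multiple models.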

The notion that we need multiple symbolic and conceptual models to understand reality remains controversial to many. It smacks of relativism, they argue, in which every person’s opinion is as valid as another person’s. But historically, the use of multiple perspectives hasn’t resulted in the abandonment of intellectual standards among mathematicians and scientists. One still needs many years of education and an advanced degree to obtain a job as a mathematician or scientist, and there is a clear hierarchy among practitioners, with the very best mathematicians and scientists working at the most prestigious universities and winning the highest awards. That is because there are still standards for what is good mathematics and science, and scholars are rewarded for solving problems and advancing knowledge. The fact that no one has agreed on what is the One True system of mathematics or logic isn’t relevant. In fact, physicist Stephen Hawking has argued:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and to the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient (The Grand Design, p. 7).

Among the most controversial and mind-bending claims Pirsig makes is that the very laws of nature themselves exist only in the human mind. “Laws of nature are human inventions, like ghosts,” he writes. Pirsig even remarks that it makes no sense to think of the law of gravity existing before the universe, that it only came into existence when Isaac Newton thought of it. It’s an outrageous claim, but if one looks closely at what the laws of nature actually are, it’s not so crazy an argument as it first appears.

For all of the advances that science has made over the centuries, there remains a sharp division of views among philosophers and scientists on one very important issue: are the laws of nature actual causal powers responsible for the origins and continuance of the universe or are the laws of nature summary descriptions of causal patterns in nature? The distinction is an important one. In the former view, the laws of physics are pre-existing or eternal and possess god-like powers to create and shape the universe; in the latter view, the laws have no independent existence – we are simply finding causal patterns and regularities in nature that allow us to predict and we call these patterns “laws.”

One powerful argument in favor of the latter view is that most of the so-called “laws of nature,” contrary to the popular view, actually have exceptions – and sometimes the exceptions are large. That is because the laws are simplified models of real phenomena. The laws were cobbled together by scientists in order to strike a careful balance between the values of scope, predictive accuracy, and simplicity. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has noted that as a result of this balance of values, physical laws are actually approximations that apply only within a certain range. This point has also been made more recently by Ronald Giere, a professor of philosophy at the University of Minnesota, in Science Without Laws and Nancy Cartwright of the University of California at San Diego in How the Laws of Physics Lie.

Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Newton’s laws of motion also have exceptions, depending on the force, distance, and speed. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of one planet. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe that Mendel’s laws should not even be considered laws.
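To put a rough number on the first of those exceptions: general relativity’s leading correction to a Newtonian orbit is, per orbit, Δφ = 6πGM / (c²a(1 − e²)) radians, and applied to Mercury it yields the famous ~43 arcseconds per century of perihelion precession that Newton’s law cannot account for. The back-of-the-envelope check below is my own illustration, using approximate textbook constants:

```python
import math

# Leading general-relativistic correction to a Newtonian orbit:
#   delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))  radians per orbit,
# applied to Mercury's perihelion. Constants are approximate.

GM_SUN = 1.32712e20      # Sun's gravitational parameter, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
A = 5.7909e10            # Mercury's semi-major axis, m
E = 0.2056               # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969     # Mercury's orbital period, days

rad_per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = math.degrees(rad_per_orbit * orbits_per_century) * 3600

print(round(arcsec_per_century, 1))  # ~43 arcsec/century that Newton misses
```

The discrepancy is tiny over a single orbit, which is why Newton’s “law” still serves perfectly well in most practical work.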

So if we think of laws of nature as being pre-existing, eternal commandments, with god-like powers to shape the universe, how do we account for these exceptions to the laws? The standard response by scientists is that their laws are simplified depictions of the real laws. But if that is the case, why not state the “real” laws? Because by the time we wrote down the real laws, accounting for every possible exception, we would have an extremely lengthy and detailed description of causation that would not recognizably be a law. The whole point of the laws of nature was to develop tools by which one could predict a large number of phenomena (scope), maintain a good-enough correspondence to reality (accuracy), and make it possible to calculate predictions without spending an inordinate amount of time and effort (simplicity). That is why although Einstein’s conception of gravity and his “field equations” have supplanted Newton’s law of gravitation, physicists still use Newton’s “law” in most cases because it is simpler and easier to use; they only resort to Einstein’s complex equations when they have to! The laws of nature are human tools for understanding, not mathematical gods that shape the universe. The actual practice of science confirms Pirsig’s point that the symbolic and conceptual models that we create to understand reality have to be judged by how good they are – simple correspondence to reality is insufficient and in many cases is not even possible anyway.

_____________

 

Ultimately, Pirsig concluded, the scientific enterprise is not that different from the pursuit of other forms of knowledge – it is based on a search for the Good. Occasionally, you see this acknowledged explicitly, when mathematicians discuss the beauty of certain mathematical proofs or results, as defined by their originality, simplicity, ability to solve many problems at once, or their surprising nature. Scientists also sometimes write about the importance of elegance in their theories, defined as the ability to explain as much as possible, as clearly as possible, and as simply as possible. Depending on the field of study, the standards of judgment, the tools, and the scope of inquiry may differ. But all forms of human knowledge — art, rhetoric, science, reason, and religion — originate in, and are dependent upon, a response to the Good or Quality. The difference between science and religion is that scientific models are more narrowly restricted to understanding how to predict and manipulate natural phenomena, whereas religious models address larger questions of meaning and value.

Pirsig did not ignore or suppress the failures of religious knowledge with regard to factual claims about nature and history. The traditional myths of creation and the stories of various prophets were contrary to what we know now about physics, biology, paleontology, and history. In addition, Pirsig was by no means a conventional theist — he apparently did not believe that God was a personal being who possessed the attributes of omniscience and omnipotence, controlling or potentially controlling everything in the universe.

However, Pirsig did believe that God was synonymous with the Good, or “Quality,” and was the source of all things.  In fact, Pirsig wrote that his concept of Quality was similar to the “Tao” (the “Way” or the “Path”) in the Chinese religion of Taoism. As such, Quality was the source of being and the center of existence. It was also an active, dynamic power, capable of bringing about higher and higher levels of being. The evolution of the universe, from simple physical forms, to complex chemical compounds, to biological organisms, to societies was Dynamic Quality in action. The most recent stage of evolution – Intellectual Quality – refers to the symbolic models that human beings create to understand the universe. They exist in the mind, but are a part of reality all the same – they represent a continuation of the growth of Quality.

What many religions were missing, in Pirsig’s view, was not objectivity, but dynamism: an ability to correct old errors and achieve new insights. The advantage of science was its willingness and ability to change. According to Pirsig,

If scientists had simply said Copernicus was right and Ptolemy was wrong without any willingness to further investigate the subject, then science would have simply become another minor religious creed. But scientific truth has always contained an overwhelming difference from theological truth: it is provisional. Science always contains an eraser, a mechanism whereby new Dynamic insight could wipe out old static patterns without destroying science itself. Thus science, unlike orthodox theology, has been capable of continuous, evolutionary growth. (Lila, p. 222)

The notion that religion and orthodoxy go together is widespread among believers and secularists. But there is no necessary connection between the two. All religions originate in social processes of story-telling, dialogue, and selective borrowing from other cultures. In fact, many religions begin as dangerous heresies before they become firmly established — orthodoxies come later. The problem with most contemporary understandings of religion is that one’s adherence to religion is often measured by one’s commitment to orthodoxy and membership in religious institutions rather than an honest quest for what is really good.  A person who insists on the literal truth of the Bible and goes to church more than once a week is perceived as being highly religious, whereas a person not connected with a church but who nevertheless seeks religious knowledge wherever he or she can find it is considered less committed or even secular.  This prejudice has led many young people to identify as “spiritual, not religious,” but religious knowledge is not inherently about unwavering loyalty to an institution or a text. Pirsig believed that mysticism was a necessary component of religious knowledge and a means of disrupting orthodoxies and recovering the dynamic aspect of religious insight.

There is no denying that the most prominent disputes between science and religion in the last several centuries regarding the physical workings of the universe have resulted in a clear triumph for scientific knowledge over religious knowledge. But the solution to false religious beliefs is not to discard religious knowledge — religious knowledge still offers profound insights beyond the scope of science. That is why it is necessary to recover the dynamic nature of religious knowledge through mysticism, correction of old beliefs, and reform. As Pirsig argued, “Good is a noun.” Not because Good is a thing or an object, but because Good is the center and foundation of all reality and all forms of knowledge, whether we are consciously aware of it or not.

A Defense of the Ancient Greek Pagan Religion

In a previous post on the topic of mythos and logos, I discussed the evolution of ancient Greek thought from its origins in imaginative legends about gods to the development of reason, philosophy, and logic. Today, every educated human being knows about the contributions of Socrates, Plato, Euclid, and Pythagoras. But the ancient Greek religion appears to us as an embarrassment, something to be passed over in silence or laughed at. Indeed, it is difficult to read about the plethora of Greek gods and goddesses and the ludicrous stories about their various activities without wondering how Greek civilization ever managed to accomplish what it did while so mired in superstition.

I am not going to defend ancient Greek superstition. But I will say this: Greek religion was much more than mere superstition — it was about devotion to a greater good. According to the German scholar Werner Jaeger, “Areté was the central ideal of all Greek culture.” (Paideia: The Ideals of Greek Culture, Vol. I, p. 15). The word areté means “excellence,” and although in early Greek history it referred primarily to the virtues of the warrior-hero, by the time of Homer areté referred more broadly to all types of excellence. Areté was rooted in the mythos of ancient Greece, in the epic poetry of Hesiod and Homer, with the more philosophical logos emerging later.

This devotion of the Greeks to a greater good was powerful, even fanatical. Religion was so central to Greek life that this ancient pre-industrial civilization spent enormous sums of money on temples, statues, and religious festivals, at a time when long hours of hard physical labor were necessary simply to keep from starving. At the same time, however, Greek religion was remarkably loose and liberal in its beliefs — there was no single accepted doctrine, no written set of rules, not even a single sacred text comparable to the Torah, Bible, or Quran. The Greeks freely created a plethora of gods and stories about the gods, and revised the stories as they wished. But the Greeks did insist upon the fundamental reality of a greater good and complete devotion to it. I will argue that this devotion was responsible for the enormous contributions of ancient Greece, and that a completely secular, rational Greece would not have accomplished nearly as much.

In order to understand my defense of ancient Greek religion, I think it is important to recognize that there are different types of knowledge. There is knowledge of natural causation and knowledge of history; but there is also esthetic knowledge (knowledge of the beautiful); moral knowledge; and knowledge of the proper goals and ends of human life. Greek religion failed in understanding natural causation and history, but often succeeded in these latter forms of knowledge. Greek religion was never merely a set of statements about the origins and history of the universe and the operations of nature. Rather, Greek religion was characterized by a number of other qualities. Greek religion was experiential, symbolic, celebratory, practical, and teleological. Let’s look at each of these features more closely.

Experiential. In order to understand Greek religion — or any religion, actually — one has to do more than simply absorb a set of statements of belief. One has to experience the presence of a greater good.

[Image: statue of Athena in the full-scale replica of the Parthenon in Nashville, Tennessee]

[Image: reconstruction of the statue of Zeus at the Temple of Zeus, Olympia]

The first picture above is of a 40-foot-tall statue of the Greek goddess Athena in a full-scale recreation of the ancient Greek Parthenon in Nashville, Tennessee. The second picture is a depiction of the probable appearance of the statue of Zeus at the Temple of Zeus in the sanctuary of Olympia, Greece, the site of the ancient Olympic games.

Contrary to popular belief, Greek statues were not all white, but were often painted in vivid colors and sometimes adorned with gold, ivory, and precious stones. The size and beauty of the temple statues were meant to convey grandeur, and that is precisely the effect they had. The statue of Zeus at Olympia was counted among the Seven Wonders of the Ancient World. A Roman general who once saw the statue of Zeus declared that he “was moved to his soul, as if he had seen the god in person.” The Greek orator and philosopher Dio Chrysostom declared that a single glimpse of the statue of Zeus would make a man forget all his earthly troubles.

Symbolic. When the Greeks created sculptures of their gods, they were not really aiming for an accurate depiction of what their gods “really” looked like. The gods were spirits or powers; the gods were responsible for creating forms, and could appear in any form they wished, but in themselves gods had no human form. Indeed, in one myth, Zeus was asked by a mortal to reveal his true form; but Zeus’s true form was a thunderbolt, so when Zeus appeared as a thunderbolt, he incinerated the unfortunate person. Rather than depict the gods “realistically,” Greek sculptors sought to depict the gods symbolically, as the most beautiful human forms imaginable, male or female. These are metaphorical or analogical depictions, using personification to represent the gods.

I am not going to argue that all Greek religion was metaphorical — clearly, most Greeks believed in the gods as real, actual personalities. But there was a strong metaphorical aspect to Greek religious thought, and it is often difficult even for scholars to tell what parts of Greek religion were metaphorical and what parts were literal. For example, we know that the Greeks actually worshiped certain virtues and desired goods, such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks used personal forms to represent these virtues, and created statues, temples, and altars dedicated to them, but they did not see the virtues as literal personalities. Some of this symbolic representation of virtues survives to this day: the blindfolded Lady Justice, the statue of Freedom on the top of the U.S. Capitol building, and the Statue of Liberty are several personifications widely recognized in modern America. Some scholars have suggested that the main Greek gods began as personifications (i.e., “Zeus” was the personification of the sky) but that over time the gods came to be seen as full-fledged personalities. However, the lack of written records from the early periods of Greek history makes it impossible to confirm or refute this claim.

Celebratory. Religion is often seen as a strict and solemn affair, and although Greek religion had its solemn side, it also had a strong celebratory aspect. The Greeks not only wanted to thank the gods for life and food and drink and love, they wanted to demonstrate their thanks and celebrate through feasts, festivals, and holidays. Indeed, it is probably the case that the only time most Greeks ate meat was after a ritual sacrifice of cattle or other livestock at the altar of a god. (Greek Religion, ed. Daniel Ogden, p. 402) In ancient Athens, about half of the days in the calendar were devoted to religious festivals, and each god or goddess often had more than one festival. The most famous religious festival was the festival devoted to Zeus, held every four years at the sanctuary of Olympia. The Greeks visited the temple of Zeus and prayed to their god — but also held games, celebrated the victors, and enjoyed feasts. The Greeks also held festivals devoted to Dionysus, the god of wine and ecstasy, in which drink, music, theater, and dancing played a central role.

Practical. When I was doing research on Greek religion, I came across a fascinating discussion on how the Greeks performed animal sacrifice. Allegedly, when the animals were slaughtered, the Greeks were obligated to share a portion of the animal with the gods by burning it on the altar. However, when the Greeks butchered the animal, they reserved all the meat for themselves and sacrificed only the bones, covered with a deceptive layer of fat, for the gods. It’s hard not to be somewhat amused by this. Why would the powerful, all-knowing gods be satisfied with the useless, inedible portions of an animal, while the Greeks kept the best parts for themselves? The Greeks even had a myth to justify this practice: allegedly Prometheus fooled Zeus into accepting the bones and fat, and from that original act, all future sacrifices were similarly justified. As devoted to the gods as the Greeks were, they were also practical; in a primitive society, meat was a rare and expensive commodity for most. Sacrifice was a symbolic act of devotion to the gods, but the Greeks were not prepared to go hungry by sacrificing half of their precious meat.

And what of prayer to the gods? Clearly, the Greeks prayed to the gods and asked favors of them. But prayer never stopped or even slowed Greek achievements in art, architecture, athletics, philosophy, and mathematics. No Greek ever entered the Olympic games fat and out-of-shape, hoping that copious prayers and sacrifices to Zeus would help him win the games. No Greek ever believed that one did not have to train hard for war, that prayers to their deity would suffice to save their city from destruction at the hands of an enemy. Nor did the Greeks expect incompetent agriculture or engineering would be saved by prayer. The Greeks sought inspiration, strength, and assistance from the gods, but they did not believe that prayer would substitute for their personal shortcomings and neglect.

Teleological (goal-oriented). In a previous essay, I discussed the role of teleology — explanation in terms of goals or purpose — in accounting for causation. Although modern science has largely dismissed teleological causation in favor of efficient causation, I argued that teleological, or goal-oriented, causation could have a significant role in understanding (1) the long-term development of the universe and (2) the behavior of life forms. In a teleological perspective, human beings are not merely the end result of chemical or atomic mechanisms — humans are able to partially transcend the parts they are made of and work toward certain goals or ends that they choose.

We misunderstand Greek religion when we think of it as being merely a collection of primitive beliefs about natural causation that has been superseded by science. The gods were not merely causal agents of thunderstorms, earthquakes, and plagues. They were representations of areté, idealized forms of human perfection that inspired and guided the Greeks. In the pantheon of major Greek gods, only one (Poseidon) is associated solely with natural causation, being responsible for the seas and for earthquakes. Eight of the gods were associated primarily with human qualities, activities, and institutions — love, beauty, music, healing, war, hunting, wisdom, marriage, childbirth, travel, language, and the home. Three gods were associated with both natural causation and human qualities, Zeus being responsible for thunder and lightning as well as law and justice. The Greeks also honored and worshiped mortal heroes, extraordinary persons who founded a city, overthrew a tyrant, or won a war. Inventors, poets, and athletes were worshiped as well, not because they had the powers of the gods, but because they were worthy of emulation and were sources of inspiration. (“Heroes and Hero Cults,” Greek Religion, ed. Daniel Ogden, pp. 100-14)

At this point, you may well ask, can’t we devote ourselves to the goal of excellence by using reason? There is no need to read about myths and appeal to invisible superbeings that do not exist in order to pursue excellence. This argument is partly true, but it must be pointed out that reason in itself is an insufficient guide to what goods we should be devoted to. Esthetics, imagination, and faith provide us with goals that reason by itself can’t provide. Reason is a superb tool for thinking, but it is not an all-purpose tool.

You can see the limitations of pure reason in modern, secular societies. People don’t really spend much time thinking about the greater goods they should pursue, so they fall into the trap of materialism. Religion is considered a private affair, so it is not taught in public schools, and philosophy is considered a waste of time. So people tend to borrow their life goals from their surrounding culture and peer groups; from advertisers on television and the Internet; and from movie stars and famous musicians. People end up worshiping money, technology, and celebrities; they know those things are “real” because they are material, tangible, and because their culture tells them these things are important. But this worship without religion is only a different form of irrationality and superstition. As “real” as material goods are, they only provide temporary satisfaction, and there is never an amount of money or a house big enough or a car fancy enough or a celebrity admirable enough to bring us lasting happiness.

What the early Greeks understood is that reason in itself is a weak tool for directing the passions — only passions, rightly ordered, can rule other passions. The Greeks also knew that excellence and beauty were real, even if the symbolic forms used to represent these realities were imprecise and imperfect. Finally, the Greeks understood that faith had causal potency — not in the sense that prayers could prevent an earthquake or a plague, but in the sense that attaining the heights of human achievement was possible only by total and unwavering commitment to a greater good, reinforced by ritual and habit. For the Greeks, reality was a work-in-progress: it didn’t consist merely of static “things” but of human possibilities and potential, the ability to be more than ourselves, to be greater than ourselves. However we want to symbolize it, devotion to a greater good is the first step to realizing that good. When we skip the first step, devotion, we shouldn’t be surprised when we fail to attain it.

What Does Science Explain? Part 5 – The Ghostly Forms of Physics

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria — that is, in relation to how much it describes, it must be rather simple. — John von Neumann (“Method in the Physical Sciences,” in The Unity of Knowledge, 1955)

Now we come to the final part of our series of posts, “What Does Science Explain?” (If you have not already, you can peruse parts 1, 2, 3, and 4 here). As I mentioned in my previous posts, the rise of modern science was accompanied by a change in humanity’s view of metaphysics, that is, our theory of existence. Medieval metaphysics, largely influenced by ancient philosophers, saw human beings as the center or summit of creation; furthermore, medieval metaphysics proposed a sophisticated, multifaceted view of causation. Modern scientists, however, rejected much of medieval metaphysics as subjective and saw reality as consisting mainly of objects impacting or influencing each other in mathematical patterns.  (See The Metaphysical Foundations of Modern Science by E.A. Burtt.)

I have already critically examined certain aspects of the metaphysics of modern science in parts 3 and 4. For part 5, I wish to look more closely at the role of Forms in causation — what Aristotle called “formal causation.” This theory of causation was strongly influenced by Aristotle’s predecessor Plato and his Theory of Forms. What is Plato’s “Theory of Forms”? In brief, Plato argued that the world we see around us — including all people, trees, and animals, stars, planets and other objects — is not the true reality. The world and the things in it are imperfect and perishable realizations of perfect forms that are eternal, and that continually give birth to the things we see. That is, forms are the eternal blueprints of perfection which the material world imperfectly represents. True philosophers do not focus on the material world as it is, but on the forms that material things imperfectly reflect. In order to judge a sculpture, painting, or natural setting, a person must have an inner sense of beauty. In order to evaluate the health of a particular human body, a doctor must have an idea of what a perfectly healthy human form is. In order to evaluate a government’s system of justice, a citizen must have an idea about what perfect justice would look like. In order to critically judge leaders, citizens must have a notion of the virtues that such a leader should have, such as wisdom, honesty, and courage.  Ultimately, according to Plato, a wise human being must learn and know the perfect forms behind the imperfect things we see: we must know the Form of Beauty, the Form of Justice, the Form of Wisdom, and the ultimate form, the Form of Goodness, from which all other forms flow.

Unsurprisingly, many intelligent people in the modern world regard Plato’s Theory of Forms as dubious or even outrageous. Modern science teaches us that sure knowledge can only be obtained by observation and testing of real things, but Plato tells us that our senses are deceptive, that the true reality is hidden behind what we sense. How can we possibly confirm that the forms are real? Even Plato’s student Aristotle had problems with the Theory of Forms and argued that while the forms were real, they did not really exist until they were manifested in material things.

However, there is one important sense in which modern science retained the notion of formal causation, and that is in mathematics. In other words, most scientists have rejected Plato’s Theory of Forms in all aspects except for Plato’s view of mathematics. “Mathematical Platonism,” as it is called, is the idea that mathematical forms are objectively real and are part of the intrinsic order of the universe. However, there are also sharp disagreements on this subject, with some mathematicians and scientists arguing that mathematical forms are actually creations of the human imagination.

The chief difference between Plato and modern scientists on the study of mathematics is this: According to Plato, the objects of geometry — perfect squares, perfect circles, perfect planes — existed nowhere in the material world; we only see imperfect realizations. But the truly wise studied the perfect, eternal forms of geometry rather than their imperfect realizations. Therefore, while astronomical observations indicated that planetary bodies orbited in imperfect circles, with some irregularities and errors, Plato argued that philosophers must study the perfect forms instead of the actual orbits! (The Republic, XXVI, 524D-530C) Modern science, on the other hand, is committed to observation and study of real orbits as well as the study of perfect mathematical forms.

Is it tenable to hold that Plato and Aristotle’s view of eternal forms is mostly subjective nonsense, but that they were absolutely right about mathematical forms being real? I argue that this selective borrowing from the ancient Greeks doesn’t quite work: some of the questions and difficulties with proving the reality of Platonic forms also afflict mathematical forms.

The main argument for mathematical Platonism is that mathematics is absolutely necessary for science: mathematics is the basis for the most important and valuable physical laws (which are usually in the form of equations), and everyone who accepts science must agree that the laws of nature or the laws of physics exist. However, the counterargument to this claim is that while mathematics is necessary for human beings to conduct science and understand reality, that does not mean that mathematical objects or even the laws of nature exist objectively, that is, outside of human minds.

I have discussed some of the mysterious qualities of the “laws of nature” in previous posts (here and here). It is worth pointing out that there remains a serious debate among philosophers as to whether the laws of nature are (a) descriptions of causal regularities which help us to predict or (b) causal forces in themselves. This is an important distinction that most people, including scientists, don’t notice, although the theoretical consequences are enormous. Physicist Kip Thorne writes that laws “force the Universe to behave the way it does.” But if laws have that kind of power, they must be ubiquitous (exist everywhere), eternal (exist prior to the universe), and have enormous powers although they have no detectable energy or mass — in other words, the laws of nature constitute some kind of supernatural spirit. On the other hand, if laws are summary descriptions of causation, these difficulties can be avoided — but then the issue arises: do the laws of nature or of physics really exist objectively, outside of human minds, or are they simply human-constructed statements about patterns of causation? There are good reasons to believe the latter is true.

The first thing that needs to be said is that nearly all of these so-called laws of nature are actually approximations of what really happens in nature, approximations that hold only under certain restrictive conditions. Both qualifications matter, because even the approximations break down outside of their specified conditions. Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of a single planet. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe Mendel’s laws should not even be considered laws.
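
The point that a physical “law” is an approximation valid only under restrictive conditions can be made concrete with a few lines of code. The sketch below is my own illustration, not drawn from any source cited here: it compares the ideal gas law with the van der Waals equation for carbon dioxide, using standard published constants for CO2. The two “laws” agree when the gas is dilute and diverge sharply when it is compressed.

```python
# Compare the ideal gas law (P = RT/V) with the van der Waals equation
# for one mole of CO2. Constants a and b are standard published values.

R = 0.08314            # gas constant, L·bar/(mol·K)
a, b = 3.640, 0.04267  # van der Waals constants for CO2: L²·bar/mol², L/mol

def p_ideal(T, V):
    """Pressure (bar) of one mole from the ideal gas law."""
    return R * T / V

def p_vdw(T, V):
    """Pressure (bar) of one mole from the van der Waals equation."""
    return R * T / (V - b) - a / V**2

for V in (25.0, 0.2):  # molar volume in L/mol: dilute vs. highly compressed
    pi, pv = p_ideal(300, V), p_vdw(300, V)
    print(f"V={V:5.1f} L/mol  ideal={pi:8.2f} bar  vdW={pv:8.2f} bar  "
          f"relative difference={abs(pi - pv) / pv:6.1%}")
```

At a molar volume of 25 L/mol the two equations differ by a fraction of a percent; at 0.2 L/mol they disagree by well over fifty percent. Neither equation is the “true law” of gases; each is a useful approximation within its range.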

The fact of the matter is that even with the best laws that science has come up with, we still can’t predict the motions of more than two interacting astronomical bodies without making unrealistic simplifying assumptions. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has concluded that the laws of nature or physics are actually cobbled together by scientists based on multiple criteria:

Briefly we may say that typical physical laws express a relationship between quantities or a property of systems which is the simplest useful approximation to the true physical behavior and which appears to be theoretically tractable. “Simplest” is vague in many cases, but clear for the extreme cases which provide its only use. “Useful” is a function of accuracy and range and purpose. (Michael Scriven, “The Key Property of Physical Laws — Inaccuracy,” in Current Issues in the Philosophy of Science, ed. Herbert Feigl)

The response to this argument is that it doesn’t disprove the objective existence of physical laws — it simply means that the laws that scientists come up with are approximations to real, objectively existing underlying laws. But if that is the case, why don’t scientists simply state what the true laws are? Because the “laws” would actually end up being extremely long and complex statements of causation, with so many conditions and exceptions that they would not really be considered laws.
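
The claim that we cannot predict the motions of more than two interacting bodies without simplifying assumptions can also be illustrated numerically. The following toy simulation is my own sketch: units, masses, and the gravitational constant are all set to 1, and the three bodies start on the well-known “figure eight” orbit discovered by Chenciner and Montgomery. There is no closed-form law to consult for the resulting motion; it must be stepped out numerically, and rerunning the integration after nudging one body by a millionth of a length unit produces trajectories that drift apart.

```python
import math

def accelerations(pos):
    # Pairwise Newtonian gravity with G = 1 and unit masses.
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def simulate(pos, vel, dt=0.001, steps=5000):
    # Semi-implicit Euler: update velocities first, then positions.
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

# Figure-eight initial conditions (Chenciner and Montgomery, 2000).
pos0 = [[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]]
vel0 = [[0.466203685, 0.43236573], [0.466203685, 0.43236573],
        [-0.93240737, -0.86473146]]

run_a = simulate(pos0, vel0)

nudged = [p[:] for p in pos0]
nudged[2][0] += 1e-6  # perturb one body by a millionth of a length unit
run_b = simulate(nudged, vel0)

drift = max(math.hypot(a[0] - b[0], a[1] - b[1])
            for a, b in zip(run_a, run_b))
print(f"positional drift from a 1e-6 perturbation: {drift:.3e}")
```

The exact amount of drift depends on the step size and duration of the integration; the point is that prediction here rests on numerical approximation, not on an exact law of the kind available for two bodies.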

An additional counterargument to mathematical Platonism is that while mathematics is necessary for science, it is not necessary for the universe. This is another important distinction that many people overlook. Understanding how things work often requires mathematics, but that doesn’t mean the things in themselves require mathematics. The study of geometry has given us pi and the Pythagorean theorem, but a child does not need to know these things in order to draw a circle or a right triangle. Circles and right triangles can exist without anyone, including the universe, knowing the value of pi or the Pythagorean theorem. Calculus was invented in order to understand change and acceleration; but an asteroid, a bird, or a cheetah is perfectly capable of changing direction or accelerating without needing to know calculus.

Even among mathematicians and scientists, there is a significant minority who have argued that mathematical objects are actually creations of the human imagination — that math may be used to model aspects of reality, but it does not necessarily do so. Mathematicians Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.” Derek Abbott, a professor of engineering, writes that engineers tend to reject mathematical Platonism: “the engineer is well acquainted with the art of approximation. An engineer is trained to be aware of the frailty of each model and its limits when it breaks down. . . . An engineer . . . has no difficulty in seeing that there is no such a thing as a perfect circle anywhere in the physical universe, and thus pi is merely a useful mental construct.” (“The Reasonable Ineffectiveness of Mathematics”) Einstein himself, making a distinction between mathematical objects used as models and pure mathematics, wrote that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” Hartry Field, a philosopher at New York University, has argued that mathematics is a useful fiction that may not even be necessary for science. Field goes so far as to show that it is possible to reconstruct Newton’s theory of gravity without using mathematics. (There is more discussion on this subject here and here.)

So what can we conclude about the existence of forms? I have to admit that although I’m skeptical, I have no sure conclusions. It seems unlikely that forms exist outside the mind . . . but I can’t prove they don’t exist either. Forms do seem to be necessary for human reasoning — no thinking human can do without them. And forms seem to be rooted in reality: perfect circles, perfect squares, and perfect human forms can be thought of as imaginative projections of things we see, unlike Sherlock Holmes or fire-breathing dragons or flying spaghetti monsters, which are more creatively fictitious. Perhaps one could reconcile these opposing views on forms by positing that the human mind and imagination are part of the universe itself, and that the universe is becoming increasingly consciously aware.

Another way to think about this issue was offered by Robert Pirsig in Zen and the Art of Motorcycle Maintenance. According to Pirsig, Plato made a mistake by positing Goodness as a form. Even considered as the highest form, Goodness (or “Quality,” in Pirsig’s terminology) can’t really be thought of as a static thing floating around in space or some otherworldly realm. Forms are conceptual creations of humans who are responding to Goodness (Quality). Goodness itself is not a form, because it is not an unchanging thing — it is not static or even definable. It is “reality itself, ever changing, ultimately unknowable in any kind of fixed, rigid way.” (p. 342) Once we let go of the idea that Goodness or Quality is a form, we can realize that not only is Goodness part of reality, it is reality.

As conceptual creations, ideal forms are found in both science and religion. So why, then, does there seem to be such a sharp split between science and religion as modes of knowledge? I think it comes down to this: science creates ideal forms in order to model and predict physical phenomena, while religion creates ideal forms in order to provide guidance on how we should live.

Scientists like to see how things work — they study the parts in order to understand how the wholes work. To increase their understanding, scientists may break down certain parts into smaller parts, and those parts into even smaller parts, until they come to the most fundamental, indivisible parts. Mathematics has been extremely useful in modeling and understanding these parts of nature, so scientists create and appreciate mathematical forms.

Religion, on the other hand, tends to focus on larger wholes. The imaginative element of religion envisions perfect states of being, whether it be the Garden of Eden or the Kingdom of Heaven, as well as perfect (or near perfect) humans who serve as prophets or guides to a better life. Religion is less concerned with how things work than with how things ought to work, how things ought to be. So religion will tend to focus on subjects not covered by science, including the nature and meaning of beauty, love, and justice. There will always be debates about the appropriateness of particular forms in particular circumstances, but the use of forms in both science and religion is essential to understanding the universe and our place in it.

What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of four causes as “final causation,” because it referred to the goals or ends of all things (the Greek word “telos” meaning “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in developing useful, short-term predictive models. But final causation can help make sense of long-term patterns that may not be apparent in observations made over short periods of time. Processes that look purposeless and random in the short term may actually be purposive in the long term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and outcome of its development can be observed within a reasonable period of time, and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? In the late twentieth century, a number of physicists began to speculate that this is the case, when their research indicated that the physical forces and constants of the universe can exist in only a very narrow range of possibilities in order for life to be possible, or even for the universe to exist. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming”? Consider that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of nuclear fusion. In fact, stars have been referred to as the “factories” of the heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these trace elements are essential to human life. Without the elements produced earlier by stars, we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — a star is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars make heavy elements, and he found that the process works only if certain physical constants take very specific values. When he concluded his research, Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that wholes are greater than the sum of their parts, and the behavior of wholes often has characteristics that are radically different from the parts they are made of. For example, it would be dangerous to add pure hydrogen or pure oxygen to a fire, but when hydrogen and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: upward causation only (reductionism)]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: upward and downward causation (reductionism and holism)]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143.)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10.)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, and perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it. And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel; or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
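The claim that no one geometry is uniquely “real” can be made concrete with a worked example. On the surface of a sphere, the “straight lines” are great circles, and the angles of a triangle need not sum to 180 degrees. The short sketch below (plain Python, no external libraries; the “octant” triangle is a standard textbook example, not drawn from this essay) measures the angles of the triangle whose vertices are the north pole and two points a quarter of the way around the equator:

```python
import math

def angle_at(vertex, p, q):
    """Angle of a spherical triangle at `vertex`, formed by the great-circle
    arcs toward p and q (all points given as unit vectors on the sphere)."""
    def tangent(toward):
        # Project `toward` onto the plane tangent to the sphere at `vertex`.
        dot = sum(v * t for v, t in zip(vertex, toward))
        vec = [t - dot * v for v, t in zip(vertex, toward)]
        norm = math.sqrt(sum(c * c for c in vec))
        return [c / norm for c in vec]
    t1, t2 = tangent(p), tangent(q)
    cos_angle = sum(a * b for a, b in zip(t1, t2))
    return math.acos(max(-1.0, min(1.0, cos_angle)))

# The "octant" triangle: two equator points 90 degrees apart, plus the pole.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(f"{math.degrees(total):.1f}")  # prints 270.0 -- not the Euclidean 180
```

In Euclidean geometry the same sum is always exactly 180 degrees. Both systems are internally consistent; which one applies depends on the content we choose to give to space, which is precisely the sense in which neither geometry is more “real” than the other.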

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1040 miles per hour at the equator, so relative to an observer on the moon you would be moving at roughly that speed — adjusting for the fact that the moon is also orbiting the earth at about 2288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the galaxy at roughly 500,000 miles per hour, and our galaxy is moving at about 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
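The contraction Wolfson describes follows from a single formula: an object with rest length L0, moving past an observer at speed v, is measured at length L = L0 × √(1 − v²/c²). A minimal sketch in Python (the rod length and speeds are illustrative values, not tied to any particular accelerator) shows how the effect is negligible at everyday speeds and dramatic near the speed of light:

```python
import math

def contracted_length(rest_length, v_over_c):
    """Lorentz contraction: L = L0 * sqrt(1 - (v/c)^2)."""
    return rest_length * math.sqrt(1.0 - v_over_c ** 2)

# A rod 2 miles long in its own rest frame, as measured by observers
# moving past it at increasing fractions of the speed of light:
for beta in (0.0001, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta}c -> measured length = {contracted_length(2.0, beta):.5f} miles")
```

At 0.0001c (about 67,000 miles per hour, roughly the earth’s orbital speed around the sun) the contraction is far too small to notice; at 0.999c the rod is measured at well under a tenth of its rest length, and the closer v gets to c, the closer the measured length gets to zero.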

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to a very real problem: the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw from those impressions. Sometimes we think we see something, but we don’t. People make mistakes; they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but interpret it differently. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results with the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue is required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging not only that there are other people smarter than we are, but also that the collective efforts of even less brilliant people are greater than our own individual efforts. Subduing one’s ego is also required in order to prepare for the necessity of changing one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)