The Metaphor of “Mechanism” in Science

Scientific writing makes frequent use of the metaphor of “mechanism.” The universe is conceived as a mechanism, life is a mechanism, and even human consciousness has been described as a type of mechanism. If a phenomenon is not the outcome of a mechanism, then it is random. Nearly everything science says about the universe and life falls into these two categories: mechanism and random chance.

The use of the mechanism metaphor is something most of us hardly ever notice. Science, allegedly, is all about literal truth and precise descriptions. Metaphors are for poetry and literature. But in fact mathematics and science use metaphors. Our understandings of quantity, space, and time are based on metaphors derived from our bodily experiences, as George Lakoff and Rafael Nunez have pointed out in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book Making Truth: Metaphor in Science. Among these are the “billiard ball” and “plum pudding” models of the atom, as well as the “energy landscape” of protein folding. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly.

What I wish to do in this essay is closely examine the use of the mechanism metaphor in science. I will argue that this metaphor has been extremely useful in advancing our knowledge of the natural world, but its overuse as a descriptive and predictive model has led us down the wrong path to fully understanding reality — in particular, understanding the actual nature of life.

____________________________

Thousands of years ago, human beings attributed the actions of natural phenomena to spirits or gods. A particular river or spring or even tree could have its own spirit or minor god. Many humans also believed that they themselves possessed a spirit or soul which occupied the body, gave the body life and motion and intelligence, and then departed when the body died. According to the Bible, Genesis 2:7, when God created Adam from the dust of the ground, God “breathed into his nostrils the breath of life; and man became a living soul.” Knowing very little of biology and human anatomy, early humans were inclined to think that spirit/breath gave life to material bodies; and when human bodies no longer breathed, they were dead, so presumably the “spirit” went someplace else. The ancient Hebrews also saw a role for blood in giving life, which is why they regarded blood as sacred. Thus, the Hebrews placed many restrictions on the consumption and handling of blood when they slaughtered animals for sacrifice and food. These views about the spiritual aspects of breath and blood are also the historical basis of “vitalism,” the theory that life consists of more than material parts, and must somehow be based on a vital principle, spark, or force, in addition to matter. 

The problem with the vitalist outlook is that it did not appreciably advance our knowledge of nature and the human body.  The idea of a vital principle or force was too vague and could not be tested or measured or even observed. Of course, humans did not have microscopes thousands of years ago, so we could not see cells and bacteria, much less atoms.

By the 17th century, thinkers such as Thomas Hobbes and Rene Descartes proposed that the universe and even life forms were types of mechanisms, consisting of many parts that interacted in such a way as to result in predictable patterns. The universe was often analogized to a clock. (The first mechanical clock was developed around 1300 A.D., but water clocks, based on the regulated flow of water, had been in use for thousands of years.) The great French scientist Pierre-Simon Laplace was an enthusiast for the mechanist viewpoint and even argued that the universe could be regarded as completely determined from its beginnings:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes. (A Philosophical Essay on Probabilities, Chapter Two)

Laplace’s radical determinism was not embraced by all scientists, but many shared it. Later, as the science of biology developed, it was argued that the evolution of life was not as determined as the motion of the planets. Rather, random genetic mutations resulted in new life forms, and “natural selection” determined that fit life forms flourished and reproduced, while unfit forms died out. In this view, physical mechanisms combined with random chance explained evolution.

The astounding advances in physics and biology in the past centuries certainly seem to justify the mechanism metaphor. Reality does seem to consist of various parts that interact in predictable cause-and-effect patterns. We can predict the motions of objects in space, and build technologies that send objects in the right direction and at the right speed to the right target. We can also methodically trace illnesses to a dysfunction in one or more parts of the body, and this dysfunction can often be treated by medicine or surgery.

But have we been overusing the mechanism metaphor? Does reality consist of nothing but determined and predictable cause-and-effect patterns with an element of random chance mixed in?

I believe that we can shed some light on this subject by first examining what mechanisms are — literally — and then examining what resemblances and differences there are between mechanisms and the actual universe, between mechanisms and actual life.

____________________

 

Even in ancient times, human beings created mechanisms, from clocks to catapults to cranes to odometers. The Antikythera mechanism of ancient Greece, constructed around 100 B.C., was a sophisticated device with over 30 gears that was able to predict astronomical motions, and it is considered one of the earliest computers. A fragment of the mechanism was recovered from an ocean shipwreck discovered in 1901.

Over subsequent centuries, human civilization created steam engines, propeller-driven ships, automobiles, airplanes, digital watches, computers, robots, nuclear reactors, and spaceships.

So what do most or all of these mechanisms have in common?

  1. Regularity and Predictability. Mechanisms have to be reliable. They have to do exactly what you want every time. Clocks can’t run fast, then run slow; automobiles can’t unilaterally change direction or speed; nuclear reactors can’t overheat on a whim; computers have to give the right answer every time. 
  2. Precision. The parts that make up a mechanism must fit together and move together in precise ways, or breakdown, or even disaster, will result. Engineering tolerances are typically measured in fractions of a millimeter.
  3. Stability and Durability. Mechanisms are often made of metal, and for good reason. Metal can endure extreme forces and temperatures, and, if properly maintained, can last for many decades. Metal can slightly expand and contract depending on temperature, and metals can have some flexibility when needed, but metallic constructions are mostly stable in shape and size. 
  4. Unfree/Determined. Mechanisms are built by humans for human purposes. When you manage the controls of a mechanism correctly, the results are predictable. If you get into your car and decide to drive north, you will drive north. The car will not dispute you or override your commands, unless it is programmed to override your commands, in which case it is simply following a different set of instructions. The car has no will of its own. Human beings would not build mechanisms if such mechanisms acted according to their own wills. The idea of a self-willing mechanism is a staple of science fiction, but not of science.
  5. They do not grow. Mechanisms do not become larger over time or change their basic structure like living organisms. This would be contrary to the principle of durability/stability. Mechanisms are made for a purpose, and if there is a new purpose, a new mechanism will be made.
  6. They do not reproduce. Mechanisms do not have the power of reproduction. If you put a mechanism into a resource-rich environment, it will not consume energy and materials and give birth to new mechanisms. Only life has this power. (A partial exception can be made in the case of  computer “viruses,” which are lines of code programmed to duplicate themselves, but the “viruses” are not autonomous — they do the bidding of the programmer.)
  7. Random events degrade mechanisms; they do not improve them. According to neo-Darwinism, random mutations in the genes of organisms are responsible for evolution; most mutations are harmful, but some lead to improvement, producing new and more complex organisms, ultimately culminating in human beings. So what kind of random changes lead to improved mechanisms? None, really. Mechanisms do change over time through random events, but those events degrade them. Rust sets in, parts break, electrical connections fail, lubricating fluids leak. If you leave a set of carefully preserved World War One biplanes out in a field, without human intervention, they will not eventually evolve into jet planes and rocket ships; they will simply break down. Likewise, electric toasters will not evolve into supercomputers, no matter how many millions of years you wait. Of course, organisms also degrade and die, but they have the power of reproduction, which continues the population and creates opportunities for improvement.

There is one hypothetical mechanism that, if constructed, could mimic actual organisms: a self-replicating machine. Such a machine could conceivably contain plans within itself to gather materials and energy from its environment and use them to construct copies of itself, growing exponentially in numbers as more and more machines reproduce themselves. Such machines could even be programmed to “mutate,” creating variations in their descendants. However, no such mechanism has yet been produced. Meanwhile, primitive single-celled life forms on earth have been successfully reproducing for four billion years.

Now, let’s compare mechanisms to life forms. What are the characteristics of life?

  1. Adaptability/Flexibility. The story of life on earth is a story of adaptability and flexibility. The earliest life forms, single cells, apparently arose in hydrothermal vents deep in the ocean. Later, some of these early forms evolved into multi-cellular creatures, which spread throughout the oceans. After 3.5 billion years, fish emerged, and then much later, the first land creatures. Over time, life adapted to different environments: sea, land, rivers, caves, air; and also to different climates, from the steamiest jungles to frozen environments. 
  2. Creativity/Diversification. Life is not only adaptive, it is highly creative, branching into the most diverse forms over time. Today, there are millions of species. Even in the deepest parts of the ocean, organisms thrive under pressures that would crush most other life forms. There are bacteria that can live in water at or near the boiling point. The tardigrade can survive the cold, hostile vacuum of space. The bacterium Deinococcus radiodurans is able to survive extreme doses of radiation by means of one of the most efficient DNA repair capabilities ever observed. Now it’s true that among actual mechanisms there is also great variety; but these mechanisms are not self-created — they are created by humans and retain their forms unless specifically modified by humans.
  3. Drives toward cooperation / symbiosis. Traditional Darwinist views of evolution see life as competition and “survival of the fittest.” However, more recent theorists of evolution point to the strong role of cooperation in the emergence and survival of advanced life forms. Biologist Lynn Margulis has argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, and created a more advanced life form. This theory was regarded with much skepticism at the time it was proposed, but over time it became widely accepted.  Today, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes!  By contrast, the parts of a mechanism don’t naturally come together to form the mechanism; they are forced together by their manufacturer.
  4. Growth. Life is characterized by growth. All life forms begin with either a single cell, or the merger of two cells, after which a process of repeated division begins. In multicellular organisms, the initial cell eventually becomes an embryo; and when that embryo is born, becoming an independent life form, it continues to grow. In some species, that life form develops into an animal that can weigh hundreds or even thousands of pounds. This, from a microscopic cell! No existing mechanism is capable of that kind of growth.
  5. Reproduction. Mechanisms eventually disintegrate, and life forms die. But life forms have the capability of reproducing and making copies of themselves, carrying on the line. In an environment with adequate natural resources, the number of life forms can grow exponentially. Mechanisms have not mastered that trick.
  6. Free will/choice. Mechanisms are either under direct human control, are programmed to do certain things, or perform in a regular pattern, such as a clock. Life forms, in their natural settings, are free and have their own purposes. There are some regular patterns — sleep cycles, mating seasons, winter migration. But the day-to-day movements and activities of life forms are largely unpredictable. They make spur-of-the-moment decisions on where to search for food, where to find shelter, whether to fight or flee from predators, and which mate is most acceptable. In fact, the issue of mate choice is one of the most intriguing illustrations of free will in life forms — there is evidence that species may select mates for beauty over actual fitness, and human egg cells even play a role in selecting which sperm cells will be allowed to penetrate them.
  7. Able to gather energy from its environment. Mechanisms require energy to work, and they acquire such energy from wound springs or weights (in clocks), electrical outlets, batteries, or fuel. These sources of energy are provided by humans in one way or another. But life forms are forced to acquire energy on their own, and even the most primitive life forms mastered this feat billions of years ago. Plants get their energy from the sun, and animals get their energy from plants or other animals. It’s true that some mechanisms, such as space probes, can operate on their own for many years while drawing energy from solar panels. But these panels were invented and produced by humans, not by mechanisms.
  8. Self-organizing. Mechanisms are built, but life forms are self-organizing. Small components join other small components, forming a larger organization; this larger organization gathers together more components. There is a gradual growth and differentiation of functions — digestion, breathing, brain and nervous system, mobility, immune function. Now this process is very, very slow: evolution takes place over hundreds of millions of years. But mechanisms are not capable of self-organization. 
  9. Capacity for healing and self-repair. When mechanisms are broken, or not working at full potential, a human being intervenes to fix the mechanism. When organisms are injured or infected, they can self-repair by initiating multiple processes, either simultaneously or in stages: immune cells fight invaders; blood clots form in open wounds to stop bleeding; dead tissues and cells are removed by other cells; and growth hormones are released to begin the process of building new tissue. As healing nears completion, cells originally sent to repair the wound are removed or modified. Now self-repair is not always adequate, and organisms die all the time from injury or infection. But they would die much sooner, and probably a species would not persist at all, without the means of self-repair. Even the medications and surgery that modern science has developed largely work with and supplement the body’s healing capacities — after all, surgery would be unlikely to work in most cases without the body’s means of self-repair after the surgeon completes cutting and sewing.

______________________

 

The mechanism metaphor served a very useful purpose in the history of science, by spurring humanity to uncover the cause-and-effect patterns responsible for the motions of stars and planets and the biological functions of life. We can now send spacecraft to planets; we can create new chemicals to improve our lives; we now know that illness is the result of a breakdown in the relationship between the parts of a living organism; and we are getting better and better in figuring out which human parts need medication or repair, so that lifespans and general health can be extended.

But if we are seeking the broadest possible understanding of what life is, and not just the biological functions of life, we must abandon the mechanism metaphor as inadequate and even deceptive. I believe the mechanism metaphor misses several major characteristics of life:

  1. Change. Whether it is growth, reproduction, adaptation, diversification, or self-repair, life is characterized by change, by plasticity, flexibility, and malleability. 
  2. Self-Driven Progress. There is clearly an overall improvement in life forms over time. Changes in species may take place over millions or billions of years, but even so, the differences between a single-celled animal and contemporary multicellular creatures are astonishingly large. It is not just a question of “complexity,” but of capability. Mammals, reptiles, and birds have senses, mobility, and intelligence that single-celled creatures do not have.
  3. Autonomy and freedom. Although some scientists are inclined to think of living creatures, including humans, as “gene machines,” life forms can’t be easily analogized to pre-programmed machines. Certainly, life forms have goals that they pursue — but the pursuit of these goals in an often hostile environment requires numerous spur-of-the-moment decisions that do not lead to the predictable outcomes we expect of mechanisms.

Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, argues in Lila that the fundamental nature of life is its ability to move away from mechanistic patterns, and science has overlooked this fact because scientists consider it their job to look for mechanisms:

Mechanisms are the enemy of life. The more static and unyielding the mechanisms are, the more life works to evade them or overcome them. The law of gravity, for example, is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the simple protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. . . .  This would explain why patterns of life [in evolution] do not change solely in accord with causative ‘mechanisms’ or ‘programs’ or blind operations of physical laws. They do not just change valuelessly. They change in ways that evade, override and circumvent these laws. The patterns of life are constantly evolving in response to something ‘better’ than that which these laws have to offer. (Lila, 1991 hardcover edition, p. 143)

But if the “mechanism” metaphor is inadequate, what are some alternative conceptualizations and metaphors that can retain the previous advances of science while deepening our understanding and helping us make new discoveries? I will discuss this issue in the next post.

Next: Beyond the “Mechanism” Metaphor in Biology

 

What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of the four causes as “final causation,” because it referred to the goals or ends of all things (the Greek word “telos” meaning “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Physical Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception of God. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? A number of physicists began to speculate in the late twentieth century that this was the case, when their research indicated that the physical forces and constants of the universe must fall within a very narrow range of possibilities for life to be possible, or even for the universe to persist. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming”? Consider the fact that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of fusion. In fact, stars have been referred to as the “factories” of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these elements are essential to human life. Without the elements produced earlier by stars we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — it is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars made heavy elements, and he noted that the process required very specific physical values in order to work. When he concluded his research, Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that the wholes are greater than the parts, and the behavior of wholes often has characteristics that are radically different from the parts that they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: reductionism (upward causation only)]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: reductionism and holism (upward and downward causation)]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143.)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10.)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it.  And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel;  or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
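The point that multiple geometries are equally valid can be made concrete with a small computation. On the surface of a sphere, the “straight lines” are great circles, and the angles of a triangle add up to more than 180 degrees, contradicting a familiar Euclidean theorem. A minimal sketch in Python, using the classic octant triangle (a hypothetical example, not one from the text): the north pole plus two points on the equator, 90 degrees of longitude apart.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def angle_at(vertex, p, q):
    """Angle (in degrees) at `vertex` of the spherical triangle vertex-p-q,
    measured between the great-circle arcs running toward p and toward q."""
    # Tangent directions at `vertex`: project p and q onto the plane
    # perpendicular to `vertex` (all points lie on the unit sphere).
    tp = [pi - dot(vertex, p) * vi for pi, vi in zip(p, vertex)]
    tq = [qi - dot(vertex, q) * vi for qi, vi in zip(q, vertex)]
    cos_a = dot(tp, tq) / (norm(tp) * norm(tq))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Octant triangle on the unit sphere.
north = (0.0, 0.0, 1.0)   # north pole
eq0 = (1.0, 0.0, 0.0)     # equator, longitude 0
eq90 = (0.0, 1.0, 0.0)    # equator, longitude 90 East

angle_sum = (angle_at(north, eq0, eq90)
             + angle_at(eq0, north, eq90)
             + angle_at(eq90, north, eq0))
print(angle_sum)  # 270.0 -- each angle is 90 degrees; Euclid requires 180
```

Both spherical and Euclidean geometry are internally consistent; which one “fits” depends on what content we choose to give to space.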

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon, you would be moving at that speed — adjusting for the fact that the moon is also orbiting around the earth at 2,288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the center of the galaxy at roughly 500,000 miles per hour, and our galaxy is moving at 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, in the limiting case of a light particle (a photon), length contraction becomes infinite — there is no distance, and the entire universe looks like a point! (Strictly speaking, a photon has no valid rest frame, so this is a limiting statement rather than a literal perspective.)
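The contraction in Wolfson’s accelerator example follows from the Lorentz factor γ = 1/√(1 − v²/c²): a moving object’s length in the observer’s frame is its rest length divided by γ. A rough sketch (the precise contracted figure depends on the exact speed assumed; with the speed quoted, the factor works out to about a thousand, so the 2-mile accelerator shrinks to a few meters in the electrons’ frame):

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

beta = 0.9999995             # electron speed as a fraction of c, from the quote
rest_length_m = 2 * 1609.34  # 2 miles in meters, as measured on Earth

gamma = lorentz_gamma(beta)
contracted_m = rest_length_m / gamma

print(round(gamma))             # contraction factor at this speed (~1000)
print(round(contracted_m, 1))   # length in the electrons' frame, in meters
```

Neither length is more “real” than the other; each is correct in its own reference frame.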

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrodinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late physicist John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to the very real problem of the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw in response to those sense impressions. Sometimes we think we see something, but we don’t. People make mistakes, they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results with the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue is required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging not only that there are other people smarter than we are, but also that the collective efforts of even less brilliant people are greater than our own individual efforts. Subduing one’s ego also prepares us for the necessity of changing our minds in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe, to the evolution of life, to chemical reactions, and the building of technological devices, scientists have learned to predict causal sequences and manipulate these causal sequences for the benefit (or occasionally, detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete in terms of offering predictive models.

There remains the issue, however, of whether the predictive models of science really explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything that one needs to know, there are actually some serious disputes even among scientists about what causation is, what a valid explanation is, whether predictive models need to be realistic, and whether some of the entities scientists study, such as the “laws of nature” and the mathematics that is often part of those laws, are real at all.

The fundamental issues of causation, explanation, and reality are discussed in detail in Edwin Arthur Burtt’s book The Metaphysical Foundations of Modern Science (first published in 1924). According to Burtt, the birth and growth of modern science came with the development of a new metaphysics — that is, a new account of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there were four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the correct translation of the Greek word that Aristotle used is “explanation” rather than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form that an object is destined to have. According to Aristotle, all objects share the same matter — it is the arrangement of matter into their proper forms that causes matter to become a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe — God was the Supreme Good, the goal or telos to which all creation was drawn in pursuit of its final goals and perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it led to predictive models that successfully answered certain questions about natural processes that were previously mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge. Some human senses were unjustifiably dismissed as unreliable or deceptive and some types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.

The Use of Fiction and Falsehood in Science

Astrophysicist Neil deGrasse Tyson has some interesting and provocative things to say about religion in a recent interview. I tend to agree with Tyson that religions have a number of odd or even absurd beliefs that are contrary to science and reason. One statement by Tyson, however, struck me as inaccurate. According to Tyson, “[T]here are religions and belief systems, and objective truths. And if we’re going to govern a country, we need to base that governance on objective truths — not your personal belief system.” (The Daily Beast)

I have a great deal of respect for Tyson as a scientist, and Tyson clearly knows more about physics than I do. But I think his understanding of what scientific knowledge provides is naïve and unsupported by history and present day practice. The fact of the matter is that scientists also have belief systems, “mental models” of how the world works. These mental models are often excellent at making predictions, and may also be good for explanation. But the mental models of science may not be “objectively true” in representing reality.

The best mental models in science satisfy several criteria: they reliably predict natural phenomena; they cover a wide range of such phenomena (i.e., they cover much more than a handful of special cases); and they are relatively simple. Now it is not easy to create a mental model that satisfies these criteria, especially because there are tradeoffs between the different criteria. As a result, even the best scientists struggle for many years to create adequate models. But as descriptions of reality, the models, or components of the models, may be fictional or even false. Moreover, although we think that the models we have today are true, every good scientist knows that in the future our current models may be completely overturned by new models based on entirely new conceptions. Yet scientists often respect or retain the older models because they are useful, even when those models are known to misrepresent reality!

Consider the differences between Isaac Newton’s conception of gravity and Albert Einstein’s conception of gravity. According to Newton, gravity is a force that attracts objects to each other. If you throw a ball on earth, the path of the ball eventually curves downward because of the gravitational attraction of the earth. In Newton’s view, planets orbit the sun because the force of gravity pulls planetary bodies away from the straight-line paths that they would normally follow as a result of inertia: hence, planets move in closed, elliptical orbits. But according to Einstein, gravity is not a force — gravity seems like it’s a force, but it’s actually a “fictitious force.” In Einstein’s view, objects seem to attract each other because mass warps or curves spacetime, and objects tend to follow the paths made by curved spacetime. Newton and Einstein agree that inertia causes objects in motion to continue in straight lines unless they are acted on by a force; but in Einstein’s view, planets orbit the sun because they are actually already travelling straight paths, only in curved spacetime! (Yes, this makes sense — if you travel in a jet, your straightest possible path between two cities is actually curved, because the earth is round.)
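The jet analogy can be checked numerically: on the globe, the shortest route along the surface between two cities is an arc of a great circle, which looks curved on a flat map even though it is the “straightest” path available within the curved surface. A sketch, using approximate coordinates for New York and London as an illustrative example:

```python
import math

EARTH_RADIUS_KM = 6371.0

def to_xyz(lat_deg, lon_deg):
    """Convert latitude/longitude to a 3D point on a unit sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def great_circle_km(p1, p2):
    """Length of the great-circle arc (the surface geodesic) between two points."""
    cos_angle = sum(a * b for a, b in zip(p1, p2))
    return EARTH_RADIUS_KM * math.acos(max(-1.0, min(1.0, cos_angle)))

def chord_km(p1, p2):
    """Straight-line distance through the Earth (not a travelable route)."""
    return EARTH_RADIUS_KM * math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

nyc = to_xyz(40.7, -74.0)
london = to_xyz(51.5, -0.1)

arc = great_circle_km(nyc, london)   # geodesic along the surface
chord = chord_km(nyc, london)        # shorter, but it tunnels through rock
print(round(arc), round(chord))
```

The chord is shorter, but it is not available to anything confined to the surface; within the surface, the curved-looking great circle is as straight as a path can be — the same sense in which a planet’s orbit is a straight path in curved spacetime.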

Scientists agree that Einstein’s view of gravity is correct (for now). But they also continue to use Newtonian models all the time. Why? Because Newtonian models are much simpler than Einstein’s and scientists don’t want to work harder than they have to! Using Newtonian conceptions of gravity as a real force, scientists can still track the paths of objects and send satellites into orbit; Newton’s equations work perfectly fine as predictive models in most cases. It is only in extraordinary cases of very high gravity or very high speeds that scientists must abandon Newtonian models and use Einstein’s to get more accurate predictions. Otherwise scientists much prefer to assume gravity is a real force and use Newtonian models. Other fictitious forces that scientists calculate using Newton’s models are the Coriolis force and centrifugal force.
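The practical adequacy of Newton’s model is easy to illustrate. Treating gravity as a real attractive force, the circular-orbit condition (gravitational force supplies exactly the centripetal force, GMm/r² = mv²/r) gives the speed a satellite needs at a given altitude. A minimal sketch, using standard published values for the constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def circular_orbit_speed(altitude_m):
    """Orbital speed from Newton's force law: v = sqrt(G * M / r)."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

# A satellite 400 km up (roughly the altitude of the ISS):
v = circular_orbit_speed(400e3)
print(round(v), "m/s")  # about 7.7 km/s
```

The calculation treats gravity as a genuine force, which Einstein’s theory says it is not — yet the answer is accurate enough to put satellites into orbit, which is why the “false” model survives.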

Even in cases where you might expect scientists to use Einstein’s conception of curved spacetime, there is not a consistent practice. Sometimes scientists assume that spacetime is curved, sometimes they assume spacetime is flat. According to theoretical physicist Kip Thorne, “It is extremely useful, in relativity research, to have both paradigms at one’s fingertips. Some problems are solved most easily and quickly using the curved spacetime paradigm; others, using flat spacetime. Black hole problems . . . are most amenable to curved spacetime techniques; gravitational-wave problems . . . are most amenable to flat spacetime techniques.” (Black Holes and Time Warps). Whatever method provides the best results is what matters, not so much whether spacetime is really curved or not.

The question of the reality of mental models in science is particularly acute with regard to mathematical models. For many years, mathematicians have been debating whether or not the objects of mathematics are real, and they have yet to arrive at a consensus. So, if an equation accurately predicts how natural phenomena behave, is it because the equation exists “out there” someplace? Or is it because the equation is just a really good mental model? Einstein himself argued that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” By this, Einstein meant that it was possible to create perfectly certain mathematical models in the human mind; but that the matching of these models’ predictions to natural phenomena required repeated observation and testing, and one could never be completely sure that one’s model was the final answer and therefore that it really objectively existed.

And even if mathematical models work perfectly in predicting the behavior of natural phenomena, there remains the question of whether the different components of the model really match to something in reality. As noted above, Newton’s model of gravity does a pretty good job of predicting motion — but the part of the model that describes gravity as a force is simply wrong. In mathematics, the set of numbers known as “imaginary numbers” are used by engineers for calculating electric current; they are used by 3D modelers; and they are used by physicists in quantum mechanics, among other applications. But that doesn’t necessarily mean that imaginary numbers exist or correspond to some real quantity — they are just useful components of an equation.
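The electrical case makes the point vividly: engineers write the impedance of an AC circuit as a complex number whose imaginary part tracks the phase shift between voltage and current, yet nobody claims the measured current is “imaginary.” A minimal sketch with hypothetical component values (a series RC circuit, chosen for illustration):

```python
import cmath
import math

# Hypothetical series RC circuit driven by a sinusoidal source.
R = 100.0    # resistance, ohms
C = 1e-6     # capacitance, farads
V = 5.0      # source voltage amplitude, volts
f = 1000.0   # frequency, hertz

omega = 2 * math.pi * f
# Complex impedance: real part from the resistor,
# imaginary part -1/(omega*C) from the capacitor.
Z = R + 1.0 / (1j * omega * C)

I = V / Z                  # complex current, via Ohm's law with impedance
amplitude = abs(I)         # measurable current amplitude, amperes
phase_deg = math.degrees(cmath.phase(I))  # measurable phase lead of the current

print(round(amplitude * 1000, 1), "mA, leading by", round(phase_deg, 1), "degrees")
```

The outputs — an amplitude and a phase — are perfectly real and measurable; the imaginary unit is a component of the model, not a claim about what exists.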

A great many scientists are quite upfront about the fact that their models may not be an accurate reflection of reality. In their view, the purpose of science is to predict the behavior of natural phenomena, and as long as science gets better and better at this, it is less important if models are proved to be a mismatch to reality. Brian Koberlein, an astrophysicist at the Rochester Institute of Technology, writes that scientific theories should be judged by the quality and quantity of their predictions, and that theories capable of making predictions can’t be proved wrong, only replaced by theories that are better at predicting. For example, he notes that the caloric theory of heat, which posited the existence of an invisible fluid within materials, was quite successful in predicting the behavior of heat in objects, and still is at present. Today, we don’t believe such a fluid exists, but we didn’t discard the theory until we came up with a new theory that could predict better. The caloric theory of heat wasn’t “proven wrong,” just replaced with something better. Koberlein also points to Newton’s conception of gravity, which is still used today because it is simpler than Einstein’s and “good enough” at predicting in most cases. Koberlein concludes that for these reasons, Einstein will “never” be wrong — we just may find a theory better at predicting.

Stephen Hawking has discussed the problem of truly knowing reality, and notes that it is perfectly possible to have different theories, with entirely different conceptual frameworks, that work equally well at predicting the same phenomena. In a fanciful example, he observes that goldfish living in a curved bowl would see straight-line movement outside the bowl as curved, yet could still develop good predictive theories. Likewise, human beings may have a distorted picture of reality, but we are still capable of building good predictive models. Hawking calls his philosophy “model-dependent realism”:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If there are two models that both agree with observation, like the goldfish’s model and ours, then one cannot say that one is more real than the other. One can use whichever model is more convenient in the situation under consideration. (The Grand Design, p. 46)

So if science consists of belief systems/mental models, which may contain fictions or falsehoods, how exactly does science differ from religion?

Well, for one thing, science far excels religion in providing good predictive models. If you want to know how the universe began, how life evolved on earth, how to launch a satellite into orbit, or how to build a computer, religious texts offer virtually nothing that can help you with these tasks. Neil deGrasse Tyson is absolutely correct about the failure of religion in this respect. Traditional stories of the earth’s creation, such as the one found in the Bible’s book of Genesis, were useful first attempts to understand our origins, but they have long since been eclipsed by contemporary scientific models, and there is no use denying this.

What religion does offer, and science does not, is a transcendent picture of how we ought to live our lives and an interpretation of life’s meaning according to that transcendent picture. Science can predict the behavior of natural phenomena to some extent, but human beings are free-willed. We can decide to love others or to love ourselves above others. We can seek peace, or murder in the pursuit of power and profit. Whatever we decide to do, science can assist us in our actions, but it cannot tell us what we ought to do. Religion provides that vision, and if religious visions are partly imaginative, so are many aspects of scientific models. Einstein himself, while insisting that science was the pursuit of objective knowledge, also saw a role for religion in providing a transcendent vision:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man.

Now, fundamentalists and atheists might both agree that rejecting the truth of sacred scripture with regard to the big bang and evolution tends to undermine the transcendent visions of religion. But the fact of the matter is that scientists never reject a mental model simply because parts of it may be fictional or false; if the model provides useful guidance, it remains a valid part of human knowledge.