What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of the four causes as “final causation,” because it concerned the goals or ends of all things (the Greek word “telos” meaning “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though the early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception of God. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about the true ends of nature, and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? A number of physicists began to speculate that this was the case in the late twentieth century, when their research indicated that the physical forces and constants of the universe must fall within a very narrow range of values for life to be possible, or even for the universe to endure. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming?” Consider the fact that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of fusion. In fact, stars have been referred to as the “factories” of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these elements are essential to human life. Without the elements produced earlier by stars we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — it is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle pioneered the study of how stars make heavy elements, and he found that the process depends on physical constants taking very specific values; most famously, the carbon nucleus must have an energy level (a resonance) at just the right spot, a resonance Hoyle predicted before it was experimentally confirmed. Reflecting on this research, Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that wholes are greater than the sums of their parts, and the behavior of wholes often has characteristics that are radically different from those of the parts they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:


But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:


It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143.)
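The asymmetry here, readily predictable celestial mechanics versus less predictable living wholes, can be made concrete for the easy half. The following is a minimal sketch (standard physical constants, a simple semi-implicit Euler step, not a production-grade integrator) that predicts Earth's position around the Sun from the law of gravity alone:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
AU = 1.496e11        # mean Earth-Sun distance, m

def simulate_orbit(steps=365, dt=86400.0):
    """Advance Earth around the Sun one day at a time."""
    x, y = AU, 0.0
    vx, vy = 0.0, math.sqrt(G * M_SUN / AU)   # circular-orbit speed, ~29.8 km/s
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -G * M_SUN * x / r**3, -G * M_SUN * y / r**3
        vx, vy = vx + ax * dt, vy + ay * dt   # update velocity first...
        x, y = x + vx * dt, y + vy * dt       # ...then position (keeps the orbit stable)
    return x, y

x, y = simulate_orbit()
print(math.hypot(x, y) / AU)   # after a year, still ~1.0 AU from the Sun
```

No comparably short program can be written for the migrating bird: the handful of variables that suffice for a planet do not suffice for a living whole, which is precisely the point of the paragraph above.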

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10.)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it. And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel; or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon, you would be moving at that speed — adjusting for the fact that the moon is also orbiting around the earth at 2,288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the galaxy at 52,000 miles per hour, and our galaxy is moving at 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.
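The frame-dependence of speed can be captured in a single line of arithmetic. Below is a toy sketch (the function name is mine, the equatorial figure is the one quoted above, and the simple subtraction rule holds only at speeds far below the speed of light):

```python
def speed_in_frame(v_object, v_observer):
    """Galilean rule: an object's velocity as measured by a moving observer.

    Both arguments are one-dimensional velocities (mph) expressed in some
    shared background frame; the result is the object's velocity relative
    to the observer.  Valid only far below the speed of light.
    """
    return v_object - v_observer

# Sitting at the equator, you move ~1,040 mph relative to Earth's center.
you = 1040.0
print(speed_in_frame(you, you))   # 0.0  -- relative to your chair, which moves with you
print(speed_in_frame(you, 0.0))   # 1040.0 -- relative to an observer fixed at Earth's center
```

Note that there is no way to call the function without supplying an observer velocity: the question "how fast am I moving?" has no answer until a reference frame is specified, which is exactly the point of the paragraph above.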

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, whose ideas in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
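The kind of length contraction described above follows from the standard Lorentz factor. A brief sketch (the formula is the textbook one; the sample speeds are merely illustrative):

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - v^2/c^2), where beta = v/c with 0 <= beta < 1."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def contracted_length(rest_length, beta):
    """Length of a moving object as measured by a 'stationary' observer."""
    return rest_length / lorentz_factor(beta)

for beta in (0.5, 0.9, 0.99, 0.9999995):
    print(beta, lorentz_factor(beta))
# gamma climbs from ~1.15 at half the speed of light to ~1000 at 0.9999995c,
# and diverges without bound as beta -> 1: "infinite length contraction."
```

Because the factor diverges as beta approaches 1, any rest length at all is squeezed toward zero, which is the sense in which the whole universe "looks like a point" to a photon.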

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late physicist John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to the very real problem of the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw in response to those sense impressions. Sometimes we think we see something, but we don’t. People make mistakes; they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results with the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue is required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging not only that there are other people smarter than we are, but also that the collective efforts of even less-smart people are greater than our own individual efforts. Subduing one’s ego is also required in order to prepare for the necessity of changing one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes or explanations for why things were. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, argued Burtt, there was also a loss.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity/mathematics, motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms, expressed in the “laws of nature,” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (The Metaphysical Foundations of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.

The mistake that these early scientists made, however, was to elevate a method into a metaphysics, by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even at the time this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Hume, and Berkeley. These philosophers’ critiques had little long-term impact, probably because scientists offered working predictive models and philosophers did not. But today, even as science promises an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in developing knowledge. The necessity for rethinking the modern scientific view of metaphysics will be the subject of my next post.

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe, to the evolution of life, to chemical reactions, and the building of technological devices, scientists have learned to predict causal sequences and manipulate these causal sequences for the benefit (or occasionally, detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete in terms of offering predictive models.

There remains the issue, however, of whether the predictive models of science really explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything that one needs to know, there are actually some serious disputes even among scientists about what causation is, what a valid explanation is, whether predictive models need to be realistic, and how real are some of the entities scientists study, such as the “laws of nature” and the mathematics that are often part of those laws.

The fundamental issues of causation, explanation, and reality are discussed in detail in a book first published in 1924, The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt. According to Burtt, the birth and growth of modern science came with the development of a new metaphysics, that is, a new account of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there were four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the correct translation of the Greek word that Aristotle used is “explanation” rather than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form that an object is destined to have. According to Aristotle, all objects share the same matter — it is the arrangement of matter into their proper forms that causes matter to become a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe — God was the Supreme Good, the goal or telos to which all creation was drawn in pursuit of its final goals and perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it led to predictive models that successfully answered certain questions about natural processes that were previously mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge. Some human senses were unjustifiably dismissed as unreliable or deceptive and some types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.