What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of the four causes as “final causation,” because it refers to the goals or ends of things (the Greek word “telos” meaning “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, the bird, and the sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? A number of physicists began to speculate that this was the case in the late twentieth century, when their research indicated that the physical forces and constants of the universe must fall within a very narrow range of values for life to be possible, or even for the universe to persist. A change in even one of these forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming”? Consider the fact that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of nuclear fusion. In fact, stars have been referred to as the “factories” of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these elements are essential to human life. Without the elements produced earlier by stars we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — a star is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars made heavy elements, and he noted that the process works only if certain nuclear properties have very specific values. When he concluded his research, Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that the wholes are greater than the parts, and the behavior of wholes often has characteristics that are radically different from the parts that they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: reductionism (upward causation only)]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: reductionism and holism (upward and downward causation)]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it.  And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel;  or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
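To make this concrete, here is a minimal sketch (in Python with NumPy; the vertices and the helper function are my own illustrative choices, not anything from the post) showing that the interior angles of a triangle drawn on the surface of a sphere add up to more than the Euclidean 180 degrees. Neither geometry is more “real” than the other; they simply start from different chosen content.

```python
import numpy as np

def spherical_angle_sum(a, b, c):
    """Sum of interior angles (in degrees) of the spherical triangle whose
    vertices a, b, c are given as unit vectors on a sphere."""
    def corner(p, q, r):
        # Angle at vertex p between the great-circle arcs p->q and p->r.
        u = q - p * np.dot(p, q)   # component of q tangent to the sphere at p
        v = r - p * np.dot(p, r)   # component of r tangent to the sphere at p
        cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return corner(a, b, c) + corner(b, a, c) + corner(c, a, b)

# A triangle with one vertex at the "north pole" and two on the "equator",
# 90 degrees of longitude apart: every interior angle is a right angle.
north = np.array([0.0, 0.0, 1.0])
eq1 = np.array([1.0, 0.0, 0.0])
eq2 = np.array([0.0, 1.0, 0.0])

print(spherical_angle_sum(north, eq1, eq2))  # 270.0, not the Euclidean 180
```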

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon you would be moving at roughly that speed — adjusting for the fact that the moon is also orbiting the earth at about 2,288 miles per hour. But also note that the earth is orbiting the sun at about 66,000 miles per hour, our solar system is orbiting the center of the galaxy at roughly 500,000 miles per hour, and our galaxy itself is moving at roughly 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
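For readers who want the arithmetic behind Wolfson’s example, here is a minimal sketch (Python; the speeds are illustrative) of the standard length-contraction formula L = L0·√(1 − v²/c²). The contracted length is extremely sensitive to exactly how close the speed is to the speed of light, which is why quoted figures for the accelerator’s apparent length depend on the assumed electron speed.

```python
import math

def contracted_length(rest_length, beta):
    """Length of an object as measured by an observer for whom it moves at
    speed v = beta * c (special-relativistic length contraction)."""
    return rest_length * math.sqrt(1.0 - beta ** 2)

rest_length_miles = 2.0  # the accelerator's length in its own rest frame
for beta in (0.9, 0.99, 0.9999995, 0.99999995):
    feet = contracted_length(rest_length_miles, beta) * 5280
    print(f"beta = {beta}: contracted length = {feet:.2f} feet")
```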

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late physicist John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.
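As a rough illustration of what the “collapse” language means operationally, here is a toy sketch (Python; not drawn from the post, and not a claim about any particular interpretation of quantum mechanics): before observation a two-state system is described by two amplitudes, and each observation yields one definite outcome with probability equal to the squared magnitude of the corresponding amplitude (the Born rule).

```python
import random

# Toy model: a system that is "in two places at once" before observation.
amplitudes = {"here": 0.6, "there": 0.8}  # 0.6**2 + 0.8**2 = 1

def observe(amps):
    """Return one definite outcome, weighted by squared amplitudes (Born rule)."""
    outcomes = list(amps)
    probs = [amps[o] ** 2 for o in outcomes]
    return random.choices(outcomes, weights=probs, k=1)[0]

counts = {"here": 0, "there": 0}
for _ in range(10_000):
    counts[observe(amplitudes)] += 1

print(counts)  # roughly 36% "here" and 64% "there"
```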

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to the very real problem of the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw in response to those sense impressions. Sometimes we think we see something, but we don’t. People make mistakes: they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results with the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue is required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging that not only are there other people smarter than we are, but that the collective efforts of even less-smart people are greater than our own individual efforts. Subduing one’s ego is also required in order to prepare for the necessity of changing one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes or explanations for why things were. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, argued Burtt, there was also a loss.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and the “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity/mathematics, motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms expressed in the “laws of nature,” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (The Metaphysical Foundations of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.

The mistake that these early scientists made, however, was to elevate a method into a metaphysics, by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even at the time this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Hume, and Berkeley. These philosophers’ critiques had little long-term impact, probably because scientists offered working predictive models and philosophers did not. But today, even as science promises an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in developing knowledge. The necessity of rethinking the modern scientific view of metaphysics will be the subject of my next post.

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe, to the evolution of life, to chemical reactions, and the building of technological devices, scientists have learned to predict causal sequences and manipulate these causal sequences for the benefit (or occasionally, detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete in terms of offering predictive models.

There remains the issue, however, of whether the predictive models of science really explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything that one needs to know, there are actually some serious disputes even among scientists about what causation is, what a valid explanation is, whether predictive models need to be realistic, and how real some of the entities scientists study actually are, such as the “laws of nature” and the mathematics that is often part of those laws.

The fundamental issues of causation, explanation, and reality are discussed in detail in a book published in 1954 entitled The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt. According to Burtt, the birth and growth of modern science came with the development of a new metaphysics, that is, a new theory of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there were four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the correct translation of the Greek word that Aristotle used is “explanation” rather than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form that an object is destined to have. According to Aristotle, all objects share the same matter — it is the arrangement of matter into their proper forms that causes matter to become a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe — God was the Supreme Good, the goal or telos to which all creation was drawn in pursuit of its final goals and perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it led to predictive models that successfully answered certain questions about natural processes that were previously mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge. Some human senses were unjustifiably dismissed as unreliable or deceptive and some types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.

The Use of Fiction and Falsehood in Science

Astrophysicist Neil deGrasse Tyson has some interesting and provocative things to say about religion in a recent interview. I tend to agree with Tyson that religions have a number of odd or even absurd beliefs that are contrary to science and reason. One statement by Tyson, however, struck me as inaccurate. According to Tyson, “[T]here are religions and belief systems, and objective truths. And if we’re going to govern a country, we need to base that governance on objective truths — not your personal belief system.” (The Daily Beast)

I have a great deal of respect for Tyson as a scientist, and Tyson clearly knows more about physics than I do. But I think his understanding of what scientific knowledge provides is naïve and unsupported by history and present day practice. The fact of the matter is that scientists also have belief systems, “mental models” of how the world works. These mental models are often excellent at making predictions, and may also be good for explanation. But the mental models of science may not be “objectively true” in representing reality.

The best mental models in science satisfy several criteria: they reliably predict natural phenomena; they cover a wide range of such phenomena (i.e., they cover much more than a handful of special cases); and they are relatively simple. Now it is not easy to create a mental model that satisfies these criteria, especially because there are tradeoffs between the different criteria. As a result, even the best scientists struggle for many years to create adequate models. But as descriptions of reality, the models, or components of the models, may be fictional or even false. Moreover, although we think that the models we have today are true, every good scientist knows that in the future our current models may be completely overturned by new models based on entirely new conceptions. Yet in many cases scientists respect or retain the older models because they are useful, even when parts of those models are known to be false as descriptions of reality!

Consider the differences between Isaac Newton’s conception of gravity and Albert Einstein’s conception of gravity. According to Newton, gravity is a force that attracts objects to each other. If you throw a ball on earth, the path of the ball eventually curves downward because of the gravitational attraction of the earth. In Newton’s view, planets orbit the sun because the force of gravity pulls planetary bodies away from the straight-line paths that they would normally follow as a result of inertia: hence, planets move in curved, elliptical orbits. But according to Einstein, gravity is not a force — gravity seems like it’s a force, but it’s actually a “fictitious force.” In Einstein’s view, objects seem to attract each other because mass warps or curves spacetime, and objects tend to follow the paths made by curved spacetime. Newton and Einstein agree that inertia causes objects in motion to continue in straight lines unless they are acted on by a force; but in Einstein’s view, planets orbit the sun because they are actually already travelling straight paths, only in curved spacetime! (Yes, this makes sense — if you travel in a jet, your straightest possible path between two cities is actually curved, because the earth is round.)

Scientists agree that Einstein’s view of gravity is correct (for now). But they also continue to use Newtonian models all the time. Why? Because Newtonian models are much simpler than Einstein’s and scientists don’t want to work harder than they have to! Using Newtonian conceptions of gravity as a real force, scientists can still track the paths of objects and send satellites into orbit; Newton’s equations work perfectly fine as predictive models in most cases. It is only in extraordinary cases of very high gravity or very high speeds that scientists must abandon Newtonian models and use Einstein’s to get more accurate predictions. Otherwise scientists much prefer to assume gravity is a real force and use Newtonian models. Other fictitious forces that scientists calculate using Newton’s models are the Coriolis force and centrifugal force.
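To see why Newtonian gravity, treated as a real attractive force, remains “good enough” for most practical work, here is a minimal sketch (Python; the constants are standard textbook values, and the circular-orbit assumption is mine) that recovers the familiar speed and period of a satellite in low earth orbit. Einstein’s theory would give essentially the same numbers here; the relativistic corrections only matter at much higher speeds or in much stronger gravitational fields.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m
altitude = 4.0e5     # roughly the altitude of the International Space Station, m

r = R_EARTH + altitude
# For a circular orbit, gravity supplies the centripetal force:
# G*M*m / r^2 = m * v^2 / r  =>  v = sqrt(G*M / r)
v = math.sqrt(G * M_EARTH / r)
period = 2 * math.pi * r / v

print(f"orbital speed  ~ {v / 1000:.1f} km/s")       # about 7.7 km/s
print(f"orbital period ~ {period / 60:.0f} minutes")  # about 92 minutes
```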

Even in cases where you might expect scientists to use Einstein’s conception of curved spacetime, there is not a consistent practice. Sometimes scientists assume that spacetime is curved, sometimes they assume spacetime is flat. According to theoretical physicist Kip Thorne, “It is extremely useful, in relativity research, to have both paradigms at one’s fingertips. Some problems are solved most easily and quickly using the curved spacetime paradigm; others, using flat spacetime. Black hole problems . . . are most amenable to curved spacetime techniques; gravitational-wave problems . . . are most amenable to flat spacetime techniques.” (Black Holes and Time Warps). Whatever method provides the best results is what matters, not so much whether spacetime is really curved or not.

The question of the reality of mental models in science is particularly acute with regard to mathematical models. For many years, mathematicians have been debating whether or not the objects of mathematics are real, and they have yet to arrive at a consensus. So, if an equation accurately predicts how natural phenomena behave, is it because the equation exists “out there” someplace? Or is it because the equation is just a really good mental model? Einstein himself argued that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” By this, Einstein meant that it was possible to create perfectly certain mathematical models in the human mind; but that the matching of these models’ predictions to natural phenomena required repeated observation and testing, and one could never be completely sure that one’s model was the final answer and therefore that it really objectively existed.

And even if mathematical models work perfectly in predicting the behavior of natural phenomena, there remains the question of whether the different components of the model really match to something in reality. As noted above, Newton’s model of gravity does a pretty good job of predicting motion — but the part of the model that describes gravity as a force is simply wrong. In mathematics, the numbers known as “imaginary numbers” are used by engineers for calculating alternating currents in electrical circuits; they are used by 3D modelers; and they are used by physicists in quantum mechanics, among other applications. But that doesn’t necessarily mean that imaginary numbers exist or correspond to some real quantity — they are just useful components of an equation.
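A concrete example of imaginary numbers serving as “useful components of an equation” is the standard complex-impedance method for alternating-current circuits. The sketch below (Python; the component values are arbitrary illustrative choices) computes the current in a series resistor-inductor circuit without anyone having to decide whether the imaginary unit exists “out there.”

```python
import cmath
import math

R = 100.0   # resistance, ohms
L = 0.5     # inductance, henries
V = 120.0   # source voltage amplitude, volts
f = 60.0    # source frequency, hertz

omega = 2 * math.pi * f
Z = complex(R, omega * L)   # impedance of the series R-L circuit: Z = R + j*omega*L
I = V / Z                   # complex current (a phasor)

print(f"current magnitude = {abs(I):.3f} A")
print(f"phase shift = {math.degrees(cmath.phase(I)):.1f} degrees")  # current lags the voltage
```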

A great many scientists are quite upfront about the fact that their models may not be an accurate reflection of reality. In their view, the purpose of science is to predict the behavior of natural phenomena, and as long as science gets better and better at this, it is less important if models are proved to be a mismatch to reality. Brian Koberlein, an astrophysicist at the Rochester Institute of Technology, writes that scientific theories should be judged by the quality and quantity of their predictions, and that theories capable of making predictions can’t be proved wrong, only replaced by theories that are better at predicting. For example, he notes that the caloric theory of heat, which posited the existence of an invisible fluid within materials, was quite successful in predicting the behavior of heat in objects, and still is at present. Today, we don’t believe such a fluid exists, but we didn’t discard the theory until we came up with a new theory that could predict better. The caloric theory of heat wasn’t “proven wrong,” just replaced with something better. Koberlein also points to Newton’s conception of gravity, which is still used today because it is simpler than Einstein’s and “good enough” at predicting in most cases. Koberlein concludes that for these reasons, Einstein will “never” be wrong — we just may find a theory better at predicting.

Stephen Hawking has discussed the problem of truly knowing reality, and notes that it is perfectly possible to have different theories with entirely different conceptual frameworks that work equally well at predicting the same phenomena. In a fanciful example, Hawking notes that goldfish living in a curved bowl will see straight-line movement outside the bowl as being curved, but despite this it would still be possible for goldfish to develop good predictive theories. He notes that likewise, human beings may also have a distorted picture of reality, but we are still capable of building good predictive models. Hawking calls his philosophy “model-dependent realism”:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If there are two models that both agree with observation, like the goldfish’s model and ours, then one cannot say that one is more real than the other. One can use whichever model is more convenient in the situation under consideration. (The Grand Design, p. 46)

So if science consists of belief systems/mental models, which may contain fictions or falsehoods, how exactly does science differ from religion?

Well for one thing, science far excels religion in providing good predictive models. If you want to know how the universe began, how life evolved on earth, how to launch a satellite into orbit, or how to build a computer, religious texts offer virtually nothing that can help you with these tasks. Neil deGrasse Tyson is absolutely correct about the failure of religion in this respect. Traditional stories of the earth’s creation, as found in the Bible’s book of Genesis, were useful first attempts to understand our origins, but they have long been eclipsed by contemporary scientific models, and there is no use denying this.

What religion does offer, and science does not, is a transcendent picture of how we ought to live our lives and an interpretation of life’s meaning according to this transcendent picture. The behavior of natural phenomena can be predicted to some extent by science, but human beings are free-willed. We can decide to love others or love ourselves above others. We can seek peace, or murder in the pursuit of power and profit. Whatever we decide to do, science can assist us in our actions, but it can’t provide guidance on what we ought to do. Religion provides that vision, and if these visions are imaginative, so are many aspects of scientific models. Einstein himself, while insisting that science was the pursuit of objective knowledge, also saw a role for religion in providing a transcendent vision:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man.

Now fundamentalists and atheists might both agree that rejecting the truth of sacred scripture with regard to the big bang and evolution tends to undermine the transcendent visions of religion. But the fact of the matter is that scientists never reject a mental model simply because parts of the model may be fictional or false; if the model provides useful guidance, it is still a valid part of human knowledge.

Does the Flying Spaghetti Monster Exist?

In a previous post, Belief and Evidence, I addressed the argument made by many atheists that those who believe in God have the burden of proof. In this view, evidence must accompany belief, and belief in anything for which there is insufficient evidence is irrational. One popular example cited by proponents of this view is the satirical creation known as the “Flying Spaghetti Monster.” Proposed as a response to the demands by creationists for equal time in the classroom with evolutionary theory, the Flying Spaghetti Monster has been cited as an example of an absurd entity which no one has the right to believe in unless one has actual evidence. According to the famous atheist Richard Dawkins, disbelievers are not required to submit evidence against the existence of either God or the Flying Spaghetti Monster; it is believers who have the burden of proof.

The problem with this philosophy is that it would seem to apply equally well to many physicists’ theories of the “multiverse,” and in fact many scientists have criticized multiverse theories on the grounds that there is no way to observe or test for other universes. The most extreme multiverse theories propose that every mathematically possible universe, each with its own slight variation on physical laws and constants, exists somewhere. Multiverse theory has even led to bizarre speculations about hypothetical entities such as “Boltzmann brains.” According to some scientists, it is statistically more likely for the random fluctuations of matter to create a free-floating brain than it is for billions of years of universal evolution to lead to brains in human bodies. (You may have heard of the claim that a million monkeys typing on a million typewriters will eventually produce the works of Shakespeare — the principle is similar.) This means that reincarnation could be possible or that we are actually Boltzmann brains that were randomly generated by matter and that we merely have the illusion that we have bodies and an actual past. According to physicist Leonard Susskind, “It is part of a much bigger set of questions about how to think about probabilities in an infinite universe in which everything that can occur, does occur, infinitely many times.”
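The monkeys-and-typewriters intuition is easy to make quantitative, and the same logic drives Boltzmann-brain arguments: any specific arrangement is fantastically improbable per trial, yet guaranteed to occur given unlimited trials. Here is a toy calculation (Python; the phrase and the alphabet size are my own illustrative choices):

```python
# Probability that a single burst of random typing reproduces one short phrase.
phrase = "to be or not to be"
alphabet_size = 27  # 26 letters plus the space bar
p_per_attempt = (1 / alphabet_size) ** len(phrase)

print(f"characters to match: {len(phrase)}")
print(f"probability per attempt: {p_per_attempt:.1e}")        # about 1.7e-26
print(f"expected attempts needed: {1 / p_per_attempt:.1e}")   # about 6e25
```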

If you think it is odd that respected scientists are actually discussing the possibility of floating brains spontaneously forming, well this is just one of the strange predictions that current multiverse theories tend to create. When one proposes the existence of an infinite number of possible universes based on an infinite variety of laws and constants, then anything is possible in some universe somewhere.

So is there a universe somewhere in which the laws of matter and energy are fine-tuned to support the existence of Flying Spaghetti Monsters? This would seem to be the logical outcome of the most extreme multiverse theories. I have hesitated to bring this argument up until now, because I am not a theoretical physicist and I do not understand the mathematics behind multiverse theory. However, I recently came across an article by Marcelo Gleiser, a physicist at Dartmouth College, who sarcastically asks, “Do Fairies Live in the Multiverse?” Discussing multiverse theories, Gleiser writes:

This brings me to the possible existence of fairies in the multiverse. The multiverse, a popular concept in modern theoretical physics, is an extension of the usual idea of the universe to encompass many possible variations. Under this view, our universe, the sum total of what’s within our “cosmic horizon” of 46 billion light years, would be one among many others. In many theories, different universes could have radically different properties, for example, electrons and protons with different masses and charges, or no electrons at all.

As in Jorge Luis Borges’ Library of Babel, which collected all possible books, the multiverse represents all that could be real if we tweaked the alphabet of nature, combining it in as many combinations as possible.

If by fairies we mean little, fabulous entities capable of flight and of magical deeds that defy what we consider reasonable in this world, then, yes, by all means, there could be fairies somewhere in the multiverse.

So, here we have a respected physicist arguing that the logical implication of existing multiverse theories, in which every possibility exists somewhere, is that fairies may well exist. Of course, Gleiser is not actually arguing that fairies exist — he is pointing out what happens when certain scientific theories propose infinite possibilities without actually being testable.

But if multiverse theories are correct, maybe the Flying Spaghetti Monster does exist out there somewhere.

The Mythos of Mathematics

‘Modern man has his ghosts and spirits too, you know.’

‘What?’

‘Oh, the laws of physics and of logic . . . the number system . . . the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.’

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

 

It is a popular position among physicists that mathematics is what ultimately lies behind the universe. When asked for an explanation for the universe, they point to numbers and equations, and furthermore claim that these numbers and equations are the ultimate reality, existing objectively outside the human mind. This view is known as mathematical Platonism, after the Greek philosopher Plato, who argued that the ultimate reality consisted of perfect forms.

The problem we run into with mathematical Platonism is that it is subject to some of the same skepticism that people have about the existence of God, or the gods. How do we know that mathematics exists objectively? We can’t sense mathematics directly; we only know that it is a useful tool for dealing with reality. The fact that math is useful does not prove that it exists independently of human minds. (For an example of this skepticism, see this short video).

Scholars George Lakoff and Rafael Nunez, in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, offer the provocative and fascinating thesis that mathematics consists of metaphors. That is, the abstractions of mathematics are ultimately grounded in conceptual comparisons to concrete human experiences. In the view of Lakoff and Nunez, all human ideas are shaped by our bodily experiences, our senses, and how these senses react to our environment. We try to make sense of events and things by comparing them to our concrete experiences. For example, we conceptualize time as a limited resource (“time is money”); we conceptualize status or mood in terms of space (happy is “up,” while sad is “down”); we personify events and things (“inflation is eating up profits,” “we must declare war on poverty”). Metaphors are so prevalent and taken for granted, that most of the time we don’t even notice them.

Mathematical systems, according to Lakoff and Nunez, are also metaphorical creations of the human mind. Since human beings have common experiences with space, time, and quantities, our mathematical systems are similar. But we do have a choice in the metaphors we use, and that is where the creative aspect of mathematics comes in. In other words, mathematics is grounded in common experiences, but mathematical conceptual systems are creations of the imagination. According to Lakoff and Nunez, confusion and paradoxes arise when we take mathematics literally and don’t recognize the metaphors behind mathematics.

Lakoff and Nunez point to a number of common human activities that subsequently led to the creation of mathematical abstractions. The collection of objects led to the creation of the “counting numbers,” otherwise known as “natural numbers.” The use of containers led to the notion of sets and set theory. The use of measuring tools (such as the ruler or yard stick) led to the creation of the “number line.” The number line in turn was extended to a plane with x and y coordinates (the “Cartesian plane”). Finally, in order to understand motion, mathematicians conceptualized time as space, plotting points in time as if they were points in space — time is not literally the same as space, but it is easier for human beings to measure time if it is plotted on a spatial graph.

Throughout history, while the counting numbers have been widely accepted, there have been controversies over the creation of other types of numbers. One of the reasons for these controversies is the mistaken belief that numbers must be objectively real rather than metaphorical. So the number zero was initially controversial because it made no sense to speak literally of having a collection of zero objects. Negative numbers were even more controversial because it’s impossible to literally have a negative number of objects. But as the usefulness of zero and negative numbers as metaphorical expressions and in performing calculations became clear, these numbers became accepted as “real” numbers.

The metaphor of the measuring stick/number line, according to Lakoff and Nunez, has been responsible for even more controversy and confusion. The basic problem is that a line is a continuous object, not a collection of objects. If one makes an imaginative metaphorical leap and envisions the line as a collection of objects known as segments or points, that is very useful for measuring the line, but a line is not literally a collection of segments or points that correspond to objectively existing numbers.

If you draw three points on a piece of paper, the collection of points clearly corresponds to the number three, and only the number three. But if you draw a line on a piece of paper, how many numbers does it have? Where do those numbers go? The answer is up to you, depending on what you hope to measure and how much precision you want. The only requirement is that the numbers are in order and the length of the segments is consistently defined. You can put zero on the left side of the line, the right side of the line, or in the middle. You can use negative numbers or not use negative numbers. The length of the segments can be whatever you want, as long as the definitions of segment length are consistent.

The number line is a great mental tool, but it does not objectively exist, outside of the human mind. Neglecting this fact has led to paradoxes that confounded the ancient Greeks and continue to mystify human beings to this day. The first major problem arose when the Greeks attempted to determine the ratio of the sides of a particular polygon and discovered that the ratio could not be expressed as a ratio of whole numbers, but rather as an infinite, nonrepeating decimal. For example, a right triangle with two shorter sides of length 1 would, according to the Pythagorean theorem, have a hypotenuse length equivalent to the square root of 2, which is an infinite decimal: 1.41421356. . .  This scandalized the ancient Greeks at first, because many of them had a religious devotion to the idea that whole numbers existed objectively and were the ultimate basis of reality. Nevertheless, over time the Greeks eventually accepted the so-called “irrational numbers.”

Perhaps the most famous irrational number is pi, the measure of the ratio between the circumference of a circle and its diameter: 3.14159265. . . The fact that pi is an infinite decimal fascinates people to no end, and scientists have calculated the value of pi to over 13 trillion digits. But the digital representation of pi has no objective existence — it is simply a creation of the human imagination based on the metaphor of the measuring stick / number line. There’s no reason to be surprised or amazed that the ratio of the circumference of a circle to its diameter is an infinite decimal; lines are continuous objects, and expressing lines as being composed of discrete objects known as segments is bound to lead to difficulties eventually. Moreover, pi is not necessary for the existence of circles. Even children are perfectly capable of drawing circles without knowing the value of pi. If children can draw circles without knowing the value of pi, why should the universe need to know the value of pi? Pi is simply a mental tool that human beings created to understand the ratio of certain line lengths by imposing a conceptual framework of discrete segments on a continuous quantity. Benjamin Lee Buckley, in his book The Continuity Debate, underscores this point, noting that one can use discrete tools for measuring continuity, but that truly continuous quantities are not really composed of discrete objects.

It is true that mathematicians have designated pi and other irrational numbers as “real” numbers, but the reality of the existence of pi outside the human mind is doubtful. An infinitely precise pi implies infinitely precise measurement, but there are limits to how precise one can be in reality, even assuming absolutely perfect measuring instruments. Although pi has been calculated to over 13 trillion digits, only about 39 digits are needed to calculate the circumference of the known universe to within the width of a single hydrogen atom! Furthermore, the Planck length is generally regarded as the smallest physically meaningful length in the universe. Although extraordinarily small, the Planck length sets a definite limit on how precise pi can be in reality. At some point, depending on the size of the circle one creates, the extra digits of pi are simply meaningless.
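As a rough check on this claim, here is a small Python sketch. The radius of the observable universe and the size of a hydrogen atom are rounded, assumed values; the calculation simply shows how little difference the digits beyond the 39th make:

```python
from decimal import Decimal, getcontext

# How much error does truncating pi at 39 decimal places introduce in the
# circumference of a circle the size of the observable universe?
# (Radius and atom size below are rounded, assumed values.)
getcontext().prec = 60

R = Decimal("4.4e26")   # approximate radius of the observable universe, in meters
pi_50 = Decimal("3.14159265358979323846264338327950288419716939937510")
pi_39 = Decimal("3.141592653589793238462643383279502884197")  # truncated at 39 decimals

circumference_error = 2 * R * (pi_50 - pi_39)
print(circumference_error)   # roughly 1e-13 meters, far smaller than a hydrogen atom,
                             # which is about 1e-10 meters across
```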

Undoubtedly, the number line is an excellent mental tool. If we had perfect vision, perfect memory, and perfect eye-hand coordination, we wouldn’t need to divide lines into segments and count how many segments there are. But our vision is limited, our memories are fallible, and our eye-hand coordination is imperfect. That is why we need to use versions of the number line to measure things. But we need to recognize that we are creating and imposing a conceptual tool on reality. This tool is metaphorical and, while it originates in human experience, it is not reality itself.

Lakoff and Nunez point to other examples of metaphorical expressions in mathematics, such as the concept of infinity. Mathematicians discuss the infinitely large, the infinitely small, and functions in calculus that come infinitely close to some designated limit. But Lakoff and Nunez point out that the notion of actual (literal) infinity, as opposed to potential infinity, has been extremely problematic, because calculating or counting infinity is inherently an endless process. Lakoff and Nunez argue that envisioning infinity as a thing, or the result of a completed process, is inherently metaphorical, not literal. If you’ve ever heard children use the phrase “infinity plus one!” in their taunts, you can see some of the difficulties with envisioning infinity as a thing, because one can simply take the allegedly completed process and start it again. Oddly, even professional mathematicians don’t agree on the question of whether “infinity plus one” is a meaningful statement. Traditional mathematics says that infinity plus one is still infinity, but there are more recent number systems in which infinity plus one is meaningful. (For a discussion of how different systems of mathematics arrive at different answers to the same question, see this post.)
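To make the disagreement concrete, here is a brief worked contrast; the particular number systems are my own illustrative choices rather than ones named by Lakoff and Nunez. In the extended real line used in calculus, infinity simply absorbs addition, whereas in Cantor’s ordinal arithmetic (and in the more recent surreal numbers) adding one after the first infinite number produces a genuinely larger number, even though adding one before it does not:

```latex
\infty + 1 = \infty \qquad \text{(extended real line)}
\\[4pt]
\omega + 1 > \omega, \qquad 1 + \omega = \omega \qquad \text{(ordinal arithmetic)}
```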

Nevertheless, many mathematicians and physicists fervently reject the idea that mathematics comes from the human mind. If mathematics is useful for explaining and predicting real world events, they argue, then mathematics must exist in objective reality, independent of human minds. But why is it important for mathematics to exist objectively? Isn’t it enough that mathematics is a useful mental tool for describing reality? Besides, if all the mathematicians in the world stopped all their current work and devoted themselves entirely to proving the objective existence of mathematical objects, I doubt that they would succeed, and mathematical knowledge would simply stop progressing.

Scientific Evidence for the Reality of Mysticism

What exactly is mysticism? One of the problems with defining and evaluating mysticism is that mystical experiences seem to be inherently personal and unobservable to outsiders. Certainly, one can observe a person meditate and then record what that person says about his or her experience while meditating, but what is real about that experience? Persons undergoing such experiences often describe a feeling of oneness with the universe, love for all creation, and communion with the divine. But anyone can dream or daydream or imagine. It would be one thing if mystics came up with great ideas during their meditative states — a cure for a disease, a viable plan for peace between warring states, or a better political system. But this generally does not happen. Mystical experiences remain personal.

Recently, however, brain scan technologies, along with knowledge of the different functional areas of the human brain, have allowed scientists for the first time to actually observe what is going on in the brains of people who are undergoing mystical experiences. And the findings are remarkable.

Studies of persons engaged in prayer or meditation indicate that the frontal lobes of participants’ brains, the areas responsible for concentration, light up during meditation, an unsurprising finding. However, at the same time, the parietal lobes of the same brains go dark — these sections of the brain are responsible for an individual’s sense of self, and help a person orient him or herself to the world. So when a person claims to experience oneness with the universe, that appears to be exactly what is going on — the person’s sense of self is actually put to sleep. And when the sense of self disappears, so does egocentrism. Researchers have found that people who regularly engage in meditation literally reshape their brains, becoming both more attentive and more compassionate. The particular religion did not matter — Buddhist, Christian, or Sikh practitioners all seemed to experience the same changes in the brain.

Research on psychedelic drugs has found evidence that psychedelics act as potent enablers of mystical experience. Psilocybin, the active chemical in “magic mushrooms,” and mescaline, from the cactus known as peyote, are psychedelics that have been used for thousands of years in Native American religious ceremonies. LSD was synthesized in 1938 from a fungus, ergot, that may have played a role as a hallucinogen in ancient Greek religions. What these chemicals have in common is that they all seem to have effects on the brain similar to what may be experienced during deep meditation: they dissolve the sense of self and enable extraordinary visions that appear to give people a radically new perspective on their lives. One research subject described her experience on psilocybin as follows: “I know that I had a merging with what I call oneness. . . . There was a time that I was being gently pulled into it, and I saw it as light. . . . It isn’t even describable. It’s not just light; it’s love.” In fact, two-thirds of study participants who received psilocybin ranked their psychedelic experience as being among the top five most spiritually significant experiences of their lives, comparable to the birth of a child or the death of a parent.

Psilocybin has even been used to treat addiction — the mystical experience seems to reboot the brain, allowing people to break old, ingrained habits. A study of fifteen smokers who had failed multiple treatments for addiction found that after therapy sessions with psilocybin, 80 percent were able to quit cigarettes for at least 6 months, an unprecedented success rate. Smokers who seemed to have a more complete mystical experience had the greatest success quitting. According to one subject, “smoking seemed irrelevant, so I stopped.” Cancer patients who underwent treatment with psilocybin had a reduction in anxiety and distress.

Brain scans of those undergoing mystical experiences under psychedelics indicate reduced activity in the part of the brain known as the “default-mode network.” This high-level part of the brain acts as something of a corporate executive for the brain. It inhibits lower-level brain functions such as emotion and memory and also creates a sense of self, enabling persons to distinguish themselves from other people and from the rest of the world. When psychedelics suppress the default-mode network, the lower brain regions are unleashed, leading to visions that may be bizarre but, in many cases, insightful. And the sense of self disappears, as one feels a merging with the rest of the world.

It’s important not to overstate the findings of these scientific studies by citing spiritual experiences as justification for theism. These studies do not prove that God exists or that there is a supernatural dimension. The visions that people experience while under psychedelics are often chaotic and meaningless. But this sort of radical free association does seem to help people attain new perspectives and enhance their openness to new ideas. And the feeling of oneness with the universe, the dissolution of the self, is not just an unconfirmed claim — it really does seem to be supported by brain scan studies. So the mystical experience is not superstitious, pre-scientific thinking, but a valid mode of thought, one which many of us, myself included, have dismissed without ever trying it.

Scientific Evidence for the Benefits of Faith

Increasingly, scientific studies have recognized the power of positive expectations in the treatment of people who are suffering from various illnesses. The so-called “placebo” effect is so powerful that studies generally try to control for it: fake pills, fake injections, or sometimes even fake surgeries will be given to one group while another group is offered the “real” treatment. If the real drug or surgery is no better than the fake drug/surgery, then the treatment is considered a failure. What has not been recognized until relatively recently is how the power of positive expectations should be considered as a form of treatment in itself.

Recently, Harvard University has established a Program in Placebo Studies and the Therapeutic Encounter in order to study this very issue. For many scientists, the power of the placebo has been a scandal and an embarrassment, and the idea of offering a “fake” treatment to a patient seems to go against every ethical and professional principle. But the attitude of Ted Kaptchuk, head of the Harvard program, is that if something works, it’s worth studying, no matter how crazy and irrational it seems.

In fact, “crazy” and “irrational” seem to be apt words to describe the results of research on placebos. Researchers have found differences in the effectiveness of placebos based merely on appearance — large pills are more effective than small pills; two pills are better than one pill; “brand name” pills are more effective than generics; capsules are better than pills; and injections are the most effective of all! Even the color of pills affects the outcome. One study found that the most famous anti-anxiety medication in the world, Valium, has no measurable effect on a person’s anxiety unless the person knows he or she is taking it (see “The Power of Nothing” in the Dec. 12, 2011 New Yorker). The placebo is probably the oldest and simplest form of “faith healing” there is.

There are scientists who are critical of many of these placebo studies; they believe the power of placebos has been greatly exaggerated. Several studies have concluded that the placebo effect is small or insignificant, especially when objective measures of patient improvement are used instead of subjective self-reports.

However, it should be noted that the placebo effect is not simply a matter of patient feelings that are impossible to measure accurately — there is actually scientific evidence that the human brain manufactures chemicals in response to positive expectations. In the 1970s, it was discovered that people who reported a reduction in pain in response to a placebo were actually producing greater amounts of endorphins, substances in the brain chemically similar to morphine and heroin that reduce pain and can produce feelings of euphoria (as in the “runner’s high”). Increasingly, studies of the placebo effect have relied on brain scans to actually track changes in the brain in response to a patient receiving a placebo, so measurement of effects is not merely a matter of relying on what a person says. One recent study found that patients suffering from Parkinson’s disease responded better to an “expensive” placebo than to a “cheaper” one. Patients were given injections containing nothing but saline solution, but the group told that the solution cost $1,500 per dose experienced significantly greater improvements in motor function than the group given the “cheaper” placebo! This happens because the placebo effect boosts the brain’s production of dopamine, the neurotransmitter whose depletion causes the symptoms of Parkinson’s disease, and brain scans have confirmed greater dopamine activation in the brains of those given placebos.

Other studies have confirmed the close relation between the health of the human mind and the health of the body. Excessive stress weakens the immune system, creating an opening for illness. People who regularly practice meditation, on the other hand, can strengthen their immune systems and, as a result, catch colds and the flu less often. The health effects of meditation do not depend on the religion of those practicing it — Buddhist, Christian, or Sikh. The mere act of meditation is what is important.

Why has modern medicine been so slow and reluctant to acknowledge the power of positive expectations and spirituality in improving human health? I think it’s because modern science has been based on certain metaphysical assumptions about nature which have been very valuable in advancing knowledge historically, but are ultimately limited and flawed. These assumptions are: (1) Anything that exists solely in the human mind is not real; (2) Knowledge must be based on what exists objectively, that is, what exists outside the mind; and (3) Everything in nature is based on material causation — impersonal objects colliding with or forming bonds with other impersonal objects. In many respects, these metaphysical assumptions were valuable in overcoming centuries of wrong beliefs and superstitions. Scientists learned to observe nature in a disinterested fashion, to discover how nature actually is and not how we want it to be. Old myths about gods and personal spirits shaping nature became obsolete, replaced by theories of material causation, which led to technological advances that brought the human race enormous benefits.

The problem with these metaphysical assumptions, however, is that they draw too sharp a separation between the human mind and what exists outside the mind. The human mind is part of reality, embedded in reality. Scientists rely on concepts created by the human mind to understand reality, and multiple, contradictory concepts and theories may be needed to understand reality.  (See here and here). And the human mind can modify reality – it is not just a passive spectator. The mind affects the body directly because it is directly connected to the body. But the mind can also affect reality by directing the limbs to perform certain tasks — construct a house, create a computer, or build a spaceship.

So if the human mind can shape the reality of the body through positive expectations, can positive expectations bring additional benefits, beyond health? According to the American philosopher William James in his essay “The Will to Believe,” a leap of faith could be justified in certain restricted circumstances: when a momentous decision must be made, there is a large element of uncertainty, and there are not enough resources and time to reduce the uncertainty. (See this post.) In James’ view, in some cases, we must take the risk of supposing something is true, lest we lose the opportunity of gaining something beneficial. In short, “Faith in a fact can help create that fact.”

Scientific research on how expectations affect human performance tends to support James’ claim. Performance in sports is often influenced by athletes’ expectations of “good luck.” People who are optimistic and visualize their ideal goals are more likely to actually attain their goals than people who don’t. One recent study found that human performance in a color discrimination task is better when the subjects are provided a lamp with a label touting environmental friendliness. Telling people about stereotypes before crucial tests affects how well people perform on those tests — Asians who are told how good Asians are at math perform better on math tests; women who are sent the message that women are not as smart perform less well. When golfers are told that winning golf is a matter of intelligence, white golfers improve their performance; when golfers are told that golf is a matter of natural athleticism, black golfers do better.

Now, I am not about to tell you that faith is good in all circumstances and that you should always have faith. Applied across the board, faith can hurt you or even kill you. Relying solely on faith is not likely to cure cancer or other serious illnesses. Worshipers in some Pentecostal churches who handle poisonous snakes sometimes die from snake bites. And terrorists who think they will be rewarded in the afterlife for killing innocent people are truly deluded.

So what is the proper scope for faith? When should it be used and when should it not be used? Here are three rules:

First, faith must be restricted to the zone of uncertainty that always exists when evaluating facts. One can have faith in things that are unknown or not fully known, but one should not have faith in things that are contrary to facts that have been well-established by empirical research. One cannot simply say that one’s faith forbids belief in the scientific findings on evolution and the big bang, or that faith requires that one’s holy text is infallible in all matters of history, morals, and science.

Second, the benefits of faith cannot be used as evidence for belief in certain facts. A person who finds relief from Parkinson’s disease by imagining the healing powers of Christ’s love cannot argue that this proves that Jesus was truly the son of God, that Jesus could perform miracles, was crucified, and rose from the dead. These are factual claims that may or may not be historically accurate. Likewise with the golden plates of Joseph Smith that were allegedly the basis for the Book of Mormon or the ascent of the prophet Muhammad to heaven — faith does not prove any of these alleged facts. If there was evidence that one particular religious belief tended to heal people much better than other religious beliefs, then one might devote effort to examining if the facts of that religion were true. But there does not seem to be a difference among faiths — just about any faith, even the simplest faith in a mere sugar pill, seems to work.

Finally, faith should not run unnecessary risks. Faith is a supplement to reason, research, and science, not an alternative. Science, including medical science, works. If you get sick, you should go to a doctor first, then rely on faith. As the prophet Muhammad said, “Tie your camel first, then put your trust in Allah.”

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving the older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth was, in Kuhn’s view, highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles and predicts more phenomena to a greater degree of accuracy, then it must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)
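As a rough illustration of this tradeoff (my own sketch, using rounded constants), one can compare Newton’s simple inverse-square acceleration at Earth’s distance from the Sun with the dimensionless quantity GM/(rc²), a standard gauge of how large Einstein’s corrections are:

```python
# Rough gauge of how much Einstein's corrections matter at Earth's distance from the Sun.
# Values are rounded; this is an illustrative sketch, not a serious calculation.

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_sun = 1.989e30     # mass of the Sun (kg)
r = 1.496e11         # Earth-Sun distance (m)
c = 2.998e8          # speed of light (m/s)

newtonian_acceleration = G * M_sun / r**2          # ~0.006 m/s^2
relativistic_correction = G * M_sun / (r * c**2)   # dimensionless, ~1e-8

print(newtonian_acceleration)
print(relativistic_correction)
# The correction is a few parts per billion, which is why Newton's simpler
# equations are good enough for most orbital calculations.
```

Only in stronger gravitational fields, or when tiny effects accumulate over time (as with Mercury’s orbit or the clocks on GPS satellites), does the added complexity of Einstein’s equations become worth the trouble.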

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model; sometimes focusing on causation actually gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable, even if those variables are merely associated with certain outcomes rather than causing them. To choose a very simple example, a model that treats a rooster’s crowing as a predictor of the sunrise can be a very good predictive model while saying nothing about causation. And there are many examples of this in contemporary scientific practice. Data scientists working on Netflix’s predictions of customers’ movie preferences have built highly valuable predictive models out of associations between data points, even though they don’t have a true causal model of taste. (See Galit Shmueli, “To Explain or to Predict?”, Statistical Science, 2010, vol. 25, no. 3.)
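To make the distinction concrete, here is a minimal, hypothetical sketch of prediction by association — not Netflix’s actual system, and the ratings data and names are invented for illustration. It guesses whether a viewer will like a movie from how often other viewers liked that movie together with movies the viewer already liked, with no causal model of taste at all:

```python
# A toy association-based recommender: purely correlational, no causal model of taste.
# The ratings data below are made up for illustration.

ratings = {                      # viewer -> set of movies they liked
    "ana":   {"Alien", "Arrival", "Blade Runner"},
    "ben":   {"Alien", "Blade Runner", "Solaris"},
    "carol": {"Arrival", "Solaris", "Blade Runner"},
}

def co_likes(movie_a, movie_b):
    """How many viewers liked both movies."""
    return sum(1 for liked in ratings.values() if movie_a in liked and movie_b in liked)

def predict_likes(viewer_liked, candidate):
    """Score a candidate movie by its co-occurrence with movies the viewer already liked."""
    return sum(co_likes(candidate, m) for m in viewer_liked)

# Predict for a new viewer who liked "Alien" and "Arrival":
new_viewer = {"Alien", "Arrival"}
for candidate in ["Blade Runner", "Solaris"]:
    print(candidate, predict_likes(new_viewer, candidate))
# "Blade Runner" scores highest because it co-occurs most often with the viewer's
# likes -- a useful prediction that says nothing about *why* anyone likes anything.
```

The model “works” in exactly the sense the rooster model works: the associations carry predictive information whether or not they reflect causes.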

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three-dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, or that there are no standards. Making good maps requires knowledge and great skill.
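One way to see the tradeoff concretely is the familiar Mercator projection, which keeps compass bearings (and local shapes) accurate at the cost of exaggerating areas toward the poles. The short sketch below is my own illustration, not an example used by Hawking or Giere:

```python
import math

# In the Mercator projection the linear scale at latitude phi is 1/cos(phi),
# so areas are inflated by roughly 1/cos(phi)**2 relative to the equator.
for latitude in [0, 30, 60, 80]:
    inflation = 1 / math.cos(math.radians(latitude)) ** 2
    print(f"latitude {latitude:>2} degrees: areas inflated by a factor of about {inflation:.1f}")
# Greenland, which lies mostly between 60 and 80 degrees north, ends up looking
# several times larger than it really is -- the price of a map on which a straight
# line is a constant compass bearing.
```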

As I noted above, physicists tend to prefer Newton’s theory of gravity over Einstein’s for predicting the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime, space and time are relative, not absolute, spacetime is curved in the presence of mass, and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers an incremental improvement over Newton’s, but Einstein’s picture of gravity is so radically different that Kuhn was right in seeing Einstein’s theory as a revolution. Scientists continue to use Newton’s theory, however, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient. (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.