What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be astonishing. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss to provide an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it. And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel; or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
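To make the “equal validity” point concrete, here is a standard comparison (my own summary of textbook results, not drawn from the sources cited in this series) of how triangles and parallel lines behave in the three classical geometries:

```latex
% Triangle angle sums and parallels in the three classical geometries
% (standard results, stated here only for illustration)
\begin{align*}
\text{Euclidean (flat):}     &\quad \alpha + \beta + \gamma = \pi
  &&\text{(exactly one parallel through an outside point)}\\
\text{Spherical (elliptic):} &\quad \alpha + \beta + \gamma > \pi
  &&\text{(no parallels; all great circles meet)}\\
\text{Hyperbolic:}           &\quad \alpha + \beta + \gamma < \pi
  &&\text{(infinitely many parallels through an outside point)}
\end{align*}
```

Each system is internally consistent; which one best describes a given surface or space is a modeling choice, which is exactly the point about content being something we add.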

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon, you would be moving at roughly that speed — adjusting for the fact that the moon is also orbiting around the earth at 2,288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the center of the galaxy at roughly 500,000 miles per hour, and our galaxy is moving at 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
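For readers who want the formula behind these claims, the standard length-contraction relation of special relativity (summarized here, not quoted from Wolfson) is:

```latex
% Length contraction: L_0 is the object's length in its own rest frame;
% L is the length measured by an observer who sees it moving at speed v.
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}
```

As v approaches c, the square-root factor approaches zero, which is why the accelerator shrinks so drastically for the electrons, and why, in the limiting (and strictly speaking ill-defined) case of a photon’s point of view, all lengths collapse toward zero.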

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late physicist John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to a very real problem: human sense impressions are uncertain, and the conclusions our minds draw from those impressions are fallible. Sometimes we think we see something, but we don’t. People make mistakes; they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but interpret it differently. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results against the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue are required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come to different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but that effort also requires subduing one’s ego: acknowledging not only that there are people smarter than we are, but that the collective efforts of even less brilliant people exceed our own individual efforts. Subduing one’s ego also prepares us to change our minds in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes, or explanations, for why things are as they are. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, Burtt argued, something was also lost.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity (mathematics), motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms expressed in the “laws of nature” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (The Metaphysical Foundations of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.

The mistake that these early scientists made, however, was to elevate a method into a metaphysics, by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even at the time this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Hume, and Berkeley. These philosophers’ critiques had little long-term impact, probably because scientists offered working predictive models and philosophers did not. But today, even as science is promising an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in developing knowledge. The necessity of rethinking the modern scientific view of metaphysics will be the subject of my next post.

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe, to the evolution of life, to chemical reactions, to the building of technological devices, scientists have learned to predict causal sequences and manipulate them for the benefit (or, occasionally, the detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete in terms of offering predictive models.

There remains the issue, however, of whether the predictive models of science really explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything that one needs to know, there are actually some serious disputes even among scientists about what causation is, what a valid explanation is, whether predictive models need to be realistic, and how real are some of the entities scientists study, such as the “laws of nature” and the mathematics that are often part of those laws.

The fundamental issues of causation, explanation, and reality are discussed in detail in a book published in 1954 entitled The Metaphysical Foundations of Modern Science, by Edwin Arthur Burtt. According to Burtt, the birth and growth of modern science came with the development of a new metaphysics, that is, the study of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there were four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the correct translation of the Greek word that Aristotle used is “explanation” rather than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form that an object is destined to have. According to Aristotle, all objects share the same matter — it is the arrangement of matter into their proper forms that causes matter to become a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe — God was the Supreme Good, the goal or telos to which all creation was drawn in pursuit of its final goals and perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it led to predictive models that successfully answered certain questions about natural processes that were previously mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge. Some human senses were unjustifiably dismissed as unreliable or deceptive and some types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.

The Use of Fiction and Falsehood in Science

Astrophysicist Neil deGrasse Tyson has some interesting and provocative things to say about religion in a recent interview. I tend to agree with Tyson that religions have a number of odd or even absurd beliefs that are contrary to science and reason. One statement by Tyson, however, struck me as inaccurate. According to Tyson, “[T]here are religions and belief systems, and objective truths. And if we’re going to govern a country, we need to base that governance on objective truths — not your personal belief system.” (The Daily Beast)

I have a great deal of respect for Tyson as a scientist, and Tyson clearly knows more about physics than I do. But I think his understanding of what scientific knowledge provides is naïve and unsupported by history and present day practice. The fact of the matter is that scientists also have belief systems, “mental models” of how the world works. These mental models are often excellent at making predictions, and may also be good for explanation. But the mental models of science may not be “objectively true” in representing reality.

The best mental models in science satisfy several criteria: they reliably predict natural phenomena; they cover a wide range of such phenomena (i.e., they cover much more than a handful of special cases); and they are relatively simple. Now it is not easy to create a mental model that satisfies these criteria, especially because there are tradeoffs between the different criteria. As a result, even the best scientists struggle for many years to create adequate models. But as descriptions of reality, the models, or components of the models, may be fictional or even false. Moreover, although we think that the models we have today are true, every good scientist knows that in the future our current models may be completely overturned by new models based on entirely new conceptions. Yet scientists often respect or retain the older models because they are useful, even when those models misrepresent reality!

Consider the differences between Isaac Newton’s conception of gravity and Albert Einstein’s conception of gravity. According to Newton, gravity is a force that attracts objects to each other. If you throw a ball on earth, the path of the ball eventually curves downward because of the gravitational attraction of the earth. In Newton’s view, planets orbit the sun because the force of gravity pulls planetary bodies away from the straight-line paths that they would normally follow as a result of inertia: hence, planets move in closed, elliptical orbits. But according to Einstein, gravity is not a force — gravity seems like it’s a force, but it’s actually a “fictitious force.” In Einstein’s view, objects seem to attract each other because mass warps or curves spacetime, and objects tend to follow the paths made by curved spacetime. Newton and Einstein agree that inertia causes objects in motion to continue in straight lines unless they are acted on by a force; but in Einstein’s view, planets orbit the sun because they are actually already travelling straight paths, only in curved spacetime! (Yes, this makes sense — if you travel in a jet, your straightest possible path between two cities is actually curved, because the earth is round.)
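To put the contrast in symbols, here are the two standard formulations side by side (textbook material, summarized here rather than quoted from any of the sources above):

```latex
% Newton: gravity as an attractive force between masses m_1 and m_2
% separated by a distance r (G is the gravitational constant).
F = \frac{G\, m_1 m_2}{r^2}

% Einstein: no force at all. Mass-energy (the stress-energy tensor T_{\mu\nu})
% determines the curvature of spacetime (the Einstein tensor G_{\mu\nu}),
% and free objects follow the straightest available paths in that curved spacetime.
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

In Newton’s equation gravity is a force acting at a distance; in Einstein’s there is no force term at all, only geometry responding to mass and energy.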

Scientists agree that Einstein’s view of gravity is correct (for now). But they also continue to use Newtonian models all the time. Why? Because Newtonian models are much simpler than Einstein’s and scientists don’t want to work harder than they have to! Using Newtonian conceptions of gravity as a real force, scientists can still track the paths of objects and send satellites into orbit; Newton’s equations work perfectly fine as predictive models in most cases. It is only in extraordinary cases of very high gravity or very high speeds that scientists must abandon Newtonian models and use Einstein’s to get more accurate predictions. Otherwise scientists much prefer to assume gravity is a real force and use Newtonian models. Other fictitious forces that scientists calculate using Newton’s models are the Coriolis force and centrifugal force.
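As a rough illustration of how far the Newtonian picture goes in practice, here is a minimal sketch (my own example with approximate constants, not taken from any actual mission software) that uses Newton’s law of gravitation alone to estimate the speed needed for a circular low Earth orbit:

```python
import math

# Approximate constants (assumed values for illustration)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude, from Newton's law:
    gravity supplies the centripetal force, G*M*m/r**2 = m*v**2/r,
    so v = sqrt(G*M/r)."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

# A satellite at roughly the altitude of the International Space Station
v = circular_orbit_speed(400e3)
print(f"Required orbital speed: {v / 1000:.2f} km/s")  # about 7.7 km/s
```

General relativity adds only tiny corrections at these speeds and field strengths, which is why the simpler Newtonian model remains the everyday workhorse.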

Even in cases where you might expect scientists to use Einstein’s conception of curved spacetime, there is not a consistent practice. Sometimes scientists assume that spacetime is curved, sometimes they assume spacetime is flat. According to theoretical physicist Kip Thorne, “It is extremely useful, in relativity research, to have both paradigms at one’s fingertips. Some problems are solved most easily and quickly using the curved spacetime paradigm; others, using flat spacetime. Black hole problems . . . are most amenable to curved spacetime techniques; gravitational-wave problems . . . are most amenable to flat spacetime techniques.” (Black Holes and Time Warps). Whatever method provides the best results is what matters, not so much whether spacetime is really curved or not.

The question of the reality of mental models in science is particularly acute with regard to mathematical models. For many years, mathematicians have been debating whether or not the objects of mathematics are real, and they have yet to arrive at a consensus. So, if an equation accurately predicts how natural phenomena behave, is it because the equation exists “out there” someplace? Or is it because the equation is just a really good mental model? Einstein himself argued that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” By this, Einstein meant that it was possible to create perfectly certain mathematical models in the human mind; but that the matching of these models’ predictions to natural phenomena required repeated observation and testing, and one could never be completely sure that one’s model was the final answer and therefore that it really objectively existed.

And even if mathematical models work perfectly in predicting the behavior of natural phenomena, there remains the question of whether the different components of the model really correspond to something in reality. As noted above, Newton’s model of gravity does a pretty good job of predicting motion — but the part of the model that describes gravity as a force is simply wrong. In mathematics, the numbers known as “imaginary numbers” are used by engineers for calculating electric current; they are used by 3D modelers; and they are used by physicists in quantum mechanics, among other applications. But that doesn’t necessarily mean that imaginary numbers exist or correspond to some real quantity — they are just useful components of an equation.
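As an example of imaginary numbers earning their keep without any claim that they “exist,” here is a minimal sketch (my own illustration, with made-up component values) of the standard electrical-engineering use of complex numbers to compute alternating current in a series resistor-capacitor circuit:

```python
import cmath
import math

# Assumed example values, not from any real circuit
V = 10.0      # AC source amplitude, volts
f = 60.0      # frequency, hertz
R = 100.0     # resistance, ohms
C = 20e-6     # capacitance, farads

omega = 2 * math.pi * f
# Impedance: a resistor is purely real, a capacitor purely imaginary.
Z_total = R + 1 / (1j * omega * C)   # series combination

I = V / Z_total                      # complex current (a phasor)
print(f"Current magnitude: {abs(I) * 1000:.1f} mA")            # about 60 mA
print(f"Phase shift: {math.degrees(cmath.phase(I)):.1f} deg")  # about 53 degrees
```

The imaginary unit here is a bookkeeping device for phase; the measurable outputs (a current magnitude and a phase shift) are perfectly real.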

A great many scientists are quite upfront about the fact that their models may not be an accurate reflection of reality. In their view, the purpose of science is to predict the behavior of natural phenomena, and as long as science gets better and better at this, it is less important if models are proved to be a mismatch to reality. Brian Koberlein, an astrophysicist at the Rochester Institute of Technology, writes that scientific theories should be judged by the quality and quantity of their predictions, and that theories capable of making predictions can’t be proved wrong, only replaced by theories that are better at predicting. For example, he notes that the caloric theory of heat, which posited the existence of an invisible fluid within materials, was quite successful in predicting the behavior of heat in objects, and still is at present. Today, we don’t believe such a fluid exists, but we didn’t discard the theory until we came up with a new theory that could predict better. The caloric theory of heat wasn’t “proven wrong,” just replaced with something better. Koberlein also points to Newton’s conception of gravity, which is still used today because it is simpler than Einstein’s and “good enough” at predicting in most cases. Koberlein concludes that for these reasons, Einstein will “never” be wrong — we just may find a theory better at predicting.

Stephen Hawking has discussed the problem of truly knowing reality, and notes that it is perfectly possible to have different theories with entirely different conceptual frameworks that work equally well at predicting the same phenomena. In a fanciful example, Hawking notes that goldfish living in a curved bowl will see straight-line movement outside the bowl as being curved, but despite this it would still be possible for goldfish to develop good predictive theories. Likewise, he notes, human beings may have a distorted picture of reality, but we are still capable of building good predictive models. Hawking calls his philosophy “model-dependent realism”:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If there are two models that both agree with observation, like the goldfish’s model and ours, then one cannot say that one is more real than the other. One can use whichever model is more convenient in the situation under consideration. (The Grand Design, p. 46)

So if science consists of belief systems/mental models, which may contain fictions or falsehoods, how exactly does science differ from religion?

Well, for one thing, science far excels religion in providing good predictive models. If you want to know how the universe began, how life evolved on earth, how to launch a satellite into orbit, or how to build a computer, religious texts offer virtually nothing that can help you with these tasks. Neil deGrasse Tyson is absolutely correct about the failure of religion in this respect. Traditional stories of the earth’s creation, as found in the Bible’s book of Genesis, were useful first attempts to understand our origins, but they have long been eclipsed by contemporary scientific models, and there is no use denying this.

What religion does offer, and science does not, is a transcendent picture of how we ought to live our lives and an interpretation of life’s meaning according to this transcendent picture. The behavior of natural phenomena can be predicted to some extent by science, but human beings are free-willed. We can decide to love others or love ourselves above others. We can seek peace, or murder in the pursuit of power and profit. Whatever we decide to do, science can assist us in our actions, but it can’t provide guidance on what we ought to do. Religion provides that vision, and if these visions are imaginative, so are many aspects of scientific models. Einstein himself, while insisting that science was the pursuit of objective knowledge, also saw a role for religion in providing a transcendent vision:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man.

Now fundamentalists and atheists might both agree that rejecting the truth of sacred scripture with regard to the big bang and evolution tends to undermine the transcendent visions of religion. But the fact of the matter is that scientists never reject a mental model simply because parts of the model may be fictional or false; if the model provides useful guidance, it is still a valid part of human knowledge.

Does the Flying Spaghetti Monster Exist?

In a previous post, Belief and Evidence, I addressed the argument made by many atheists that those who believe in God have the burden of proof. In this view, evidence must accompany belief, and belief in anything for which there is insufficient evidence is irrational. One popular example cited by proponents of this view is the satirical creation known as the “Flying Spaghetti Monster.” Proposed as a response to the demands by creationists for equal time in the classroom with evolutionary theory, the Flying Spaghetti Monster has been cited as an example of an absurd entity which no one has the right to believe in unless one has actual evidence. According to famous atheist Richard Dawkins, disbelievers are not required to submit evidence against the existence of either God or the Flying Spaghetti Monster, it is believers that have the burden of proof.

The problem with this philosophy is that it would seem to apply equally well to many physicists’ theories of the “multiverse,” and in fact many scientists have criticized multiverse theories on the grounds that there is no way to observe or test for other universes. The most extreme multiverse theories propose that every mathematically possible universe, each with its own slight variation on physical laws and constants, exists somewhere. Multiverse theory has even led to bizarre speculations about hypothetical entities such as “Boltzmann brains.” According to some scientists, it is statistically more likely for the random fluctuations of matter to create a free-floating brain than it is for billions of years of universal evolution to lead to brains in human bodies. (You may have heard of the claim that a million monkeys typing on a million typewriters will eventually produce the works of Shakespeare — the principle is similar.) This means that reincarnation could be possible or that we are actually Boltzmann brains that were randomly generated by matter and that we merely have the illusion that we have bodies and an actual past. According to physicist Leonard Susskind, “It is part of a much bigger set of questions about how to think about probabilities in an infinite universe in which everything that can occur, does occur, infinitely many times.”

If you think it is odd that respected scientists are actually discussing the possibility of floating brains spontaneously forming, well this is just one of the strange predictions that current multiverse theories tend to create. When one proposes the existence of an infinite number of possible universes based on an infinite variety of laws and constants, then anything is possible in some universe somewhere.

So is there a universe somewhere in which the laws of matter and energy are fine-tuned to support the existence of Flying Spaghetti Monsters? This would seem to be the logical outcome of the most extreme multiverse theories. I have hesitated to bring this argument up until now, because I am not a theoretical physicist and I do not understand the mathematics behind multiverse theory. However, I recently came across an article by Marcelo Gleiser, a physicist at Dartmouth College, who sarcastically asks “Do Fairies Live in the Multiverse?” Discussing multiverse theories, Gleiser writes:

This brings me to the possible existence of fairies in the multiverse. The multiverse, a popular concept in modern theoretical physics, is an extension of the usual idea of the universe to encompass many possible variations. Under this view, our universe, the sum total of what’s within our “cosmic horizon” of 46 billion light years, would be one among many others. In many theories, different universes could have radically different properties, for example, electrons and protons with different masses and charges, or no electrons at all.

As in Jorge Luis Borges’ Library of Babel, which collected all possible books, the multiverse represents all that could be real if we tweaked the alphabet of nature, combining it in as many combinations as possible.

If by fairies we mean little, fabulous entities capable of flight and of magical deeds that defy what we consider reasonable in this world, then, yes, by all means, there could be fairies somewhere in the multiverse.

So, here we have a respected physicist arguing that the logical implication of existing multiverse theories, in which every possibility exists somewhere, is that fairies may well exist. Of course, Gleiser is not actually arguing that fairies exist — he is pointing out what happens when certain scientific theories propose infinite possibilities without actually being testable.

But if multiverse theories are correct, maybe the Flying Spaghetti Monster does exist out there somewhere.

The Mythos of Mathematics

‘Modern man has his ghosts and spirits too, you know.’

‘What?’

‘Oh, the laws of physics and of logic . . . the number system . . . the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.’

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

 

It is a popular position among physicists that mathematics is what ultimately lies behind the universe. When asked for an explanation for the universe, they point to numbers and equations, and furthermore claim that these numbers and equations are the ultimate reality, existing objectively outside the human mind. This view is known as mathematical Platonism, after the Greek philosopher Plato, who argued that the ultimate reality consisted of perfect forms.

The problem we run into with mathematical Platonism is that it is subject to some of the same skepticism that people have about the existence of God, or the gods. How do we know that mathematics exists objectively? We can’t sense mathematics directly; we only know that it is a useful tool for dealing with reality. The fact that math is useful does not prove that it exists independently of human minds. (For an example of this skepticism, see this short video).

Scholars George Lakoff and Rafael Nunez, in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, offer the provocative and fascinating thesis that mathematics consists of metaphors. That is, the abstractions of mathematics are ultimately grounded in conceptual comparisons to concrete human experiences. In the view of Lakoff and Nunez, all human ideas are shaped by our bodily experiences, our senses, and how these senses react to our environment. We try to make sense of events and things by comparing them to our concrete experiences. For example, we conceptualize time as a limited resource (“time is money”); we conceptualize status or mood in terms of space (happy is “up,” while sad is “down”); we personify events and things (“inflation is eating up profits,” “we must declare war on poverty”). Metaphors are so prevalent and taken for granted, that most of the time we don’t even notice them.

Mathematical systems, according to Lakoff and Nunez, are also metaphorical creations of the human mind. Since human beings have common experiences with space, time, and quantities, our mathematical systems are similar. But we do have a choice in the metaphors we use, and that is where the creative aspect of mathematics comes in. In other words, mathematics is grounded in common experiences, but mathematical conceptual systems are creations of the imagination. According to Lakoff and Nunez, confusion and paradoxes arise when we take mathematics literally and don’t recognize the metaphors behind mathematics.

Lakoff and Nunez point to a number of common human activities that subsequently led to the creation of mathematical abstractions. The collection of objects led to the creation of the “counting numbers,” otherwise known as “natural numbers.” The use of containers led to the notion of sets and set theory. The use of measuring tools (such as the ruler or yard stick) led to the creation of the “number line.” The number line in turn was extended to a plane with x and y coordinates (the “Cartesian plane“). Finally, in order to understand motion, mathematicians conceptualized time as space, plotting points in time as if they were points in space — time is not literally the same as space, but it is easier for human beings to measure time if it is plotted on a spatial graph.

Throughout history, while the counting numbers have been widely accepted, there have been controversies over the creation of other types of numbers. One of the reasons for these controversies is the mistaken belief that numbers must be objectively real rather than metaphorical. So the number zero was initially controversial because it made no sense to speak literally of having a collection of zero objects. Negative numbers were even more controversial because it’s impossible to literally have a negative number of objects. But as the usefulness of zero and negative numbers as metaphorical expressions and in performing calculations became clear, these numbers became accepted as “real” numbers.

The metaphor of the measuring stick/number line, according to Lakoff and Nunez, has been responsible for even more controversy and confusion. The basic problem is that a line is a continuous object, not a collection of objects. If one makes an imaginative metaphorical leap and envisions the line as a collection of objects known as segments or points, that is very useful for measuring the line, but a line is not literally a collection of segments or points that correspond to objectively existing numbers.

If you draw three points on a piece of paper, the sum of the collection of points clearly corresponds to the number three, and only the number three. But if you draw a line on a piece of paper, how many numbers does it have? Where do those numbers go? The answer is up to you, depending on what you hope to measure and how much precision you want. The only requirement is that the numbers are in order and the length of the segments is consistently defined. You can put zero on the left side of the line, the right side of the line, or in the middle. You can use negative numbers or not use negative numbers. The length of the segments can be whatever you want, as long as the definitions of segment length are consistent.

The number line is a great mental tool, but it does not objectively exist outside of the human mind. Neglecting this fact has led to paradoxes that confounded the ancient Greeks and continue to mystify human beings to this day. The first major problem arose when the Greeks attempted to determine the ratio between the sides of certain figures (most famously, the diagonal and side of a square) and discovered that the ratio could not be expressed as a ratio of whole numbers, but only as an infinite, nonrepeating decimal. For example, a right triangle with two shorter sides of length 1 would, according to the Pythagorean theorem, have a hypotenuse length equivalent to the square root of 2, which is an infinite decimal: 1.41421356. . . This scandalized the ancient Greeks at first, because many of them had a religious devotion to the idea that whole numbers existed objectively and were the ultimate basis of reality. Nevertheless, over time the Greeks eventually accepted the so-called “irrational numbers.”
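The calculation, and the classical argument behind the Greeks’ discomfort, can be sketched as follows (my own summary of a standard proof, not drawn from the sources above):

```latex
% Hypotenuse of a right triangle with legs of length 1 (Pythagorean theorem):
c^2 = 1^2 + 1^2 = 2 \quad\Rightarrow\quad c = \sqrt{2} \approx 1.41421356\ldots

% Classical argument that \sqrt{2} is not a ratio of whole numbers:
% suppose \sqrt{2} = p/q in lowest terms; then p^2 = 2q^2, so p is even, say p = 2k;
% then 4k^2 = 2q^2, so q^2 = 2k^2 and q is even as well, contradicting "lowest terms."
```

The argument shows that no pair of whole numbers can capture the diagonal exactly, which is precisely the kind of mismatch between discrete counting and continuous length discussed above.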

Perhaps the most famous irrational number is pi, the measure of the ratio between the circumference of a circle and its diameter: 3.14159265. . . The fact that pi is an infinite decimal fascinates people to no end, and scientists have calculated the value of pi to over 13 trillion digits. But the digital representation of pi has no objective existence — it is simply a creation of the human imagination based on the metaphor of the measuring stick / number line. There’s no reason to be surprised or amazed that the ratio of the circumference of a circle to its diameter is an infinite decimal; lines are continuous objects, and expressing lines as being composed of discrete objects known as segments is bound to lead to difficulties eventually. Moreover, pi is not necessary for the existence of circles. Even children are perfectly capable of drawing circles without knowing the value of pi. If children can draw circles without knowing the value of pi, why should the universe need to know the value of pi? Pi is simply a mental tool that human beings created to understand the ratio of certain line lengths by imposing a conceptual framework of discrete segments on a continuous quantity. Benjamin Lee Buckley, in his book The Continuity Debate, underscores this point, noting that one can use discrete tools for measuring continuity, but that truly continuous quantities are not really composed of discrete objects.

It is true that mathematicians have designated pi and other irrational numbers as “real” numbers, but the reality of the existence of pi outside the human mind is doubtful. An infinitely precise pi implies infinitely precise measurement, but there are limits to how precise one can be in reality, even assuming absolutely perfect measuring instruments. Although pi has been calculated to over 13 trillion digits, it is estimated that only 39 digits are needed to calculate the volume of the known universe to the precision of one atom! Furthermore, the Planck length is the smallest measurable length in the universe. Although quite small, the Planck length sets a definite limit on how precise pi can be in reality. At some point, depending on the size of the circle one creates, the extra digits in pi are simply meaningless.
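A quick back-of-the-envelope check of that claim, using rough figures for the size of the observable universe and of an atom (both assumptions for illustration), and using the circumference rather than the volume to keep the arithmetic simple:

```python
from mpmath import mp, mpf, pi, nstr

mp.dps = 60                      # work with 60 digits, so truncation is the only error
R = mpf('4.4e26')                # rough radius of the observable universe, meters (assumption)
ATOM = mpf('1e-10')              # rough diameter of an atom, meters (assumption)

pi_full = +pi                    # pi evaluated at 60 digits
pi_39 = mpf(nstr(pi_full, 39))   # pi rounded to 39 significant digits

# Error in the universe's circumference caused by using only 39 digits of pi
error = abs(2 * R * (pi_full - pi_39))
print(nstr(error, 3), "meters")               # on the order of 1e-12 meters
print("Smaller than an atom:", error < ATOM)  # True
```

Whatever the exact numbers, the conclusion holds: beyond a few dozen digits, additional precision in pi has no measurable physical meaning.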

Undoubtedly, the number line is an excellent mental tool. If we had perfect vision, perfect memory, and perfect eye-hand coordination, we wouldn’t need to divide lines into segments and count how many segments there are. But our vision is imperfect, our memories fallible, and our eye-hand coordination is imperfect. That is why we need to use versions of the number line to measure things. But we need to recognize that we are creating and imposing a conceptual tool on reality. This tool is metaphorical and, while originating in human experience, it is not reality itself.

Lakoff and Nunez point to other examples of metaphorical expressions in mathematics, such as the concept of infinity. Mathematicians discuss the infinitely large, the infinitely small, and functions in calculus that come infinitely close to some designated limit. But Lakoff and Nunez point out that the notion of actual (literal) infinity, as opposed to potential infinity, has been extremely problematic, because calculating or counting infinity is inherently an endless process. Lakoff and Nunez argue that envisioning infinity as a thing, or the result of a completed process, is inherently metaphorical, not literal. If you’ve ever heard children use the phrase “infinity plus one!” in their taunts, you can see some of the difficulties with envisioning infinity as a thing, because one can simply take the allegedly completed process and start it again. Oddly, even professional mathematicians don’t agree on the question of whether “infinity plus one” is a meaningful statement. Traditional mathematics says that infinity plus one is still infinity, but there are more recent number systems in which infinity plus one is meaningful. (For a discussion of how different systems of mathematics arrive at different answers to the same question, see this post.)

Nevertheless, many mathematicians and physicists fervently reject the idea that mathematics comes from the human mind. If mathematics is useful for explaining and predicting real world events, they argue, then mathematics must exist in objective reality, independent of human minds. But why is it important for mathematics to exist objectively? Isn’t it enough that mathematics is a useful mental tool for describing reality? Besides, if all the mathematicians in the world stopped all their current work and devoted themselves entirely to proving the objective existence of mathematical objects, I doubt that they would succeed, and mathematical knowledge would simply stop progressing.

Scientific Evidence for the Reality of Mysticism

What exactly is mysticism? One of the problems with defining and evaluating mysticism is that mystical experiences seem to be inherently personal and unobservable to outsiders. Certainly, one can observe a person meditate and then record what that person says about his or her experience while meditating, but what is real about that experience? Persons undergoing such experiences often describe a feeling of oneness with the universe, love for all creation, and communion with the divine. But anyone can dream or daydream or imagine. It would be one thing if mystics came up with great ideas during their meditative states — a cure for a disease, a viable plan for peace between warring states, or a better political system. But this generally does not happen. Mystical experiences remain personal.

Recently, however, brain scan technologies, along with knowledge of the different functional areas of the human brain, have allowed scientists for the first time to actually observe what is going on in the brains of people who are undergoing mystical experiences. And the findings are remarkable.

Studies of persons engaged in prayer or meditation indicate that the frontal lobes of participants’ brains, responsible for concentration, light up during meditation, an unsurprising finding. However, at the same time, the parietal lobes of the same brains go dark — these sections of the brain are responsible for an individual’s sense of self, and help a person orient him or herself to the world. So when a person claims to experience a oneness with the universe, that appears to be exactly what is going on — the person’s sense of self is actually put to sleep. And when the sense of self disappears, so does egocentrism. Researchers have found that people who regularly engage in meditation literally reshape their brains, becoming both more attentive and compassionate. The particular religion they belonged to did not matter — Buddhist, Christian, Sikh — all seemed to experience the same changes in the brain.

Research on psychedelic drugs has found evidence that psychedelics act as potent enablers of mystical experience. Psilocybin, the active chemical in “magic mushrooms,” and mescaline, from the cactus known as peyote, are psychedelics that have been used for thousands of years in Native American religious ceremonies. LSD was synthesized in 1938 from a fungus, ergot, that may have played a role as a hallucinogen in ancient Greek religions. What these chemicals have in common is that they all seem to have effects on the brain similar to what may be experienced during deep meditation: they dissolve the sense of self and enable extraordinary visions that appear to give people a radically new perspective on their lives. One research subject described her experience on psilocybin as follows: “I know that I had a merging with what I call oneness. . . . There was a time that I was being gently pulled into it, and I saw it as light. . . . It isn’t even describable. It’s not just light; it’s love.” In fact, two-thirds of study participants who received psilocybin ranked their psychedelic experience as being among the top five most spiritually significant experiences of their lives, comparable to the birth of a child or the death of a parent.

Psilocybin has even been used to treat addiction — the mystical experience seems to reboot the brain, allowing people to break old, ingrained habits. A study of fifteen smokers who had failed multiple treatments for addiction found that after therapy sessions with psilocybin, 80 percent were able to quit cigarettes for at least 6 months, an unprecedented success rate. Smokers who seemed to have a more complete mystical experience had the greatest success quitting. According to one subject, “smoking seemed irrelevant, so I stopped.” Cancer patients who underwent treatment with psilocybin had a reduction in anxiety and distress.

Brain scans of those undergoing mystical experiences under psychedelics indicate reduced activity in the part of the brain known as the “default-mode network.” This high-level part of the brain acts as something of a corporate executive for the brain. It inhibits lower-level brain functions such as emotion and memory and also creates a sense of self, enabling persons to distinguish themselves from other people and from the rest of the world. When psychedelics suppress the default-mode network, the lower brain regions are unleashed, leading to visions that may be bizarre but, in many cases, insightful. And the sense of self disappears, as one feels a merging with the rest of the world.

It’s important not to overstate the findings of these scientific studies by citing spiritual experiences as justification for theism. These studies do not prove that God exists or that there is a supernatural dimension. The visions that people experience while under psychedelics are often chaotic and meaningless. But this sort of radical free association does seem to help people attain new perspectives and enhance their openness to new ideas. And the feeling of oneness with the universe, the dissolution of the self, is not just an unconfirmed claim — it really does seem to be supported by brain scan studies. So the mystical experience is not superstitious pre-scientific thinking, but a valid mode of thought, one which many of us, including myself, have dismissed without even trying.