Beyond the “Mechanism” Metaphor in Physics

In previous posts, I discussed the use of the "mechanism" metaphor in science. I argued that this metaphor was historically useful in helping us make progress in understanding cause-and-effect patterns in nature, but that it is limited or even deceptive in a number of important respects. In particular, the field of biology offers evidence of spontaneity, adaptability, progress, and cooperative behavior among life forms that makes the mechanism metaphor inadequate for characterizing and explaining life.

Physics is widely regarded as the pinnacle of the "hard sciences" and, as such, the field most suited to the mechanism metaphor. In fact, many physicists are so wedded to the idea of the universe as a mechanism that they are inclined to speak as if the universe literally were a mechanism, as if we humans were actually living inside a computer simulation. Why alien races would go to the trouble of creating simulated humans such as ourselves, with our dull, slow-moving lives, is never explained. But physicists are able to get away with these wild speculations because of their stupendous success in explaining and predicting the motions and actions of objects, from the smallest particles to the largest galaxies.

Fundamental to the success of physics is the idea that all objects are subject to laws that determine their behavior. Laws are what determine how the various parts of the universal mechanism move and interact. But when one starts asking questions about what precisely physical laws are and where they come from, one runs into questions and controversies that have never been successfully resolved.

Prior to the Big Bang theory, developed in the early twentieth century, the prevailing theory among physicists was that the universe existed eternally and had no beginning. When an accumulation of astronomical observations about the expansion of the universe led to the conclusion that the universe probably began from a single point that rapidly expanded outward, physicists gradually came to accept the idea that the universe had a beginning, in a so-called "Big Bang." However, this raised a problem: if laws ran the universe, and the universe had a beginning, then the laws must have preexisted the universe. In fact, the laws must have been eternal.

But what evidence is there for the notion that the laws of the universe are eternal? Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed? Does it make sense to think of the law of conservation of mass existing before mass existed, or Mendel's laws of genetics existing before genes existed? Where and how did they exist? If one takes the logic of physics far enough, one is apt to conclude that the laws of physics are some kind of God(s), or that God is a mechanism.

Furthermore, what is the evidence for the notion that laws completely determine the motion of every particle in the universe, that the universe is deterministic? Observations and experiments under controlled conditions confirmed that the laws of Newtonian physics could indeed predict the motions of various objects. But did these observations and experiments prove that all objects everywhere behaved in completely predictable patterns?

Despite some fairly large holes in the ideas of eternal laws and determinism, both ideas have been popular among physicists and among many intellectuals. There have been dissenters, however.

The French philosopher Henri Bergson (1859-1941) argued that the universe was in fact a highly dynamic system with a large degree of freedom within it. According to Bergson, our ideas about eternal laws originated in human attempts to understand the reality of change by using fixed, static concepts. These concepts were useful tools — in fact, the tools had to be fixed and static in order to be useful. But the reality that these concepts pointed to was in fact flowing, all “things” were in flux, and we made a major mistake by equating our static concepts with reality and positing a world of eternal forms, whether that of Plato or the physicists. Actual reality, according to Bergson, was “unceasing creation, the uninterrupted up-surge of novelty.” (Henri Bergson, The Creative Mind, p. 7) Moreover, the flow of time was inherently continuous; we could try to measure time by chopping it into equal segments based on the ticking of a clock or by drawing a graph with units of time along one axis, but real time did not consist of segments any more than a flowing river consisted of segments. Time is a “vehicle of creation and choice” that refutes the idea of determinism. (p. 75)

Bergson did not dispute the experimental findings of physics, but argued that the laws of physics were insufficient to describe what the universe was really like. Physicists denied the reality of time and “unceasing creation,” according to Bergson, because scientists were searching for repeatable patterns, paying little or no attention to what was genuinely new:

[A]gainst this idea of the absolute originality and unforeseeability of forms our whole intellect rises in revolt. The essential function of our intellect, as the evolution of life has fashioned it, is to be a light for our conduct, to make ready for our action on things, to foresee, for a given situation, the events, favorable or unfavorable, which may follow thereupon. Intellect therefore instinctively selects in a given situation whatever is like something already known. . .  Science carries this faculty to the highest possible degree of exactitude and precision, but does not alter its essential character. Like ordinary knowledge, in dealing with things science is concerned only with the aspect of repetition. (Henri Bergson, Creative Evolution, p. 29)

Bergson acknowledged the existence of repetitive patterns in nature, but rather than seeing these patterns as reflecting eternal and wholly deterministic laws, Bergson proposed a different metaphor. Drawing upon the work of the French philosopher Felix Ravaisson, Bergson argued that nature develops “habits” of behavior in the same manner that human beings develop habits, from initial choices of behavior that over time become regular and subconscious: “Should we not then imagine nature, in this form, as an obscured consciousness and a dormant will? Habit thus gives us the living demonstration of this truth, that mechanism is not sufficient to itself: it is, so to speak, only the fossilized residue of a spiritual activity.” In Bergson’s view, spiritual activity was the ultimate foundation of reality, not the habits/mechanisms that resulted from it (The Creative Mind, pp. 197-98, 208).

Bergson’s views did not go over well with most scientists. In 1922, in Paris, Henri Bergson publicly debated Albert Einstein about the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time). Einstein’s theory of relativity posited that there was no absolute time that ticked at the same rate for every body in the universe. Time was linked to space in a single space-time continuum, the movement of bodies was entirely deterministic, and this movement could be predicted by calculating the space-time coordinates of these bodies. In Einstein’s view, there was no sharp distinction between past, present, and future — all events existed in a single block of space-time. This idea of a “block universe” is still predominant in physics today, though it is not without dissenters.

[Diagrams from Time in Cosmology: most people hold a "presentist" view of reality, while physicists prefer the "block universe" view, in which all events are equally real.]

In fact, when Einstein's friend Michele Besso passed away in 1955, Einstein wrote a letter of condolence to Besso's family in which he expressed his sympathies but also declared that the separation between past, present, and future was an illusion anyway, so death did not mean anything. (The Physicist and the Philosopher, pp. 338-9)

It is widely believed that Bergson lost his 1922 debate with Einstein, in large part because Bergson did not fully understand Einstein’s theory of relativity. Nevertheless, while physicists everywhere eventually came to accept relativity, many rejected Einstein’s notion of a completely determinist universe which moved as predictably as a mechanism. The French physicist Louis de Broglie and the Japanese physicist Satosi Watanabe were proponents of Bergson and argued that the indeterminacy of subatomic particles supported Bergson’s view of the reality of freedom, the flow of time, and change. Einstein, on the other hand, never did accept the indeterminacy of quantum physics and insisted to his dying day that there must be “hidden” variables that would explain everything.  (The Physicist and the Philosopher, pp. 234-38)

 

_____________________________

 

Moving forward to the present day, the debate over the reality of time has been rekindled by Lee Smolin, a theoretical physicist at the Perimeter Institute for Theoretical Physics. In Time Reborn, Smolin proposes that time is indeed real and that the neglect of this fact has hindered progress in physics and cosmology. Contrary to what you may have been taught in your science classes, Smolin argues that the laws of nature are not eternal and precise but emergent and approximate. Borrowing the theory of evolution from biology, Smolin argues that the laws of the universe evolve over time, that genuine novelty is real, and that the laws are not precise iron laws but approximate, granting a degree of freedom to what was formerly considered a rigidly deterministic universe.

One major problem with physics, Smolin argues, is that scientists tend to generalize or extrapolate from conclusions drawn from laboratory experiments conducted under highly controlled conditions, with extraneous variables carefully excluded — Smolin calls this "physics in a box." There is nothing inherently wrong with "physics in a box" — carefully controlled experiments that exclude extraneous variables are absolutely essential to progress in scientific knowledge. The problem is that one cannot take a law derived from such a controlled experiment and simply scale it up to apply to the entire universe; Smolin calls this the "cosmological fallacy." The universe contains everything, including the extraneous variables, so controlled experiments are too restricted and artificial to serve as an adequate basis for a theory that includes everything. Instead of generalizing from the bottom up based on isolated subsystems of the universe, physicists must construct theories of the whole universe from the top down. (Time Reborn, pp. 38-39, 97)

Smolin is not the first scientist to argue that the laws of nature may have evolved over time. Smolin points to the eminent physicists Paul Dirac, John Archibald Wheeler, and Richard Feynman as previous proponents of the idea that the laws may have evolved. (Time Reborn, pp. xxv-xxvi) But all of these theorists were preceded by the American philosopher and scientist Charles Sanders Peirce (1839-1914), who argued that "the only possible way of accounting for the laws of nature and for uniformity in general is to suppose them results of evolution." (Time Reborn, p. xxv) Smolin credits Peirce with originating this idea and proposes two ways in which the laws of nature may have evolved.

The first way is through a series of "Big Bangs," with each new universe selecting a different set of laws. Smolin argues that there must have been an endless succession of Big Bangs in the past which have led to our current universe with its particular set of laws. (p. 120) Furthermore, Smolin proposes that black holes create new, baby universes, each with its own laws — so the black holes in our universe are the parents of other universes, and our own universe is the child of a black hole in some other universe! (pp. 123-25) Unfortunately, it seems impossible to test this theory adequately unless there is some way of observing these other universes with their different laws.

Smolin also proposes that laws can arise at the quantum level based on what he calls the “principle of precedence.” Smolin makes an analogy to Anglo-Saxon law, in which the decisions of judges in the past serve as precedents for decisions made today and in the future, in an ever-growing body of “common law.” The idea is that everything in the universe has a tendency to develop habits; when a truly novel event occurs, and then occurs again, and again, it settles into a pattern of repetition; that settled pattern of repetition indicates the development of a new law of nature. The law did not previously exist eternally — it emerged out of habit. (Time Reborn, pp. 146-53) Furthermore, rather than being bound by deterministic laws, the universe remains genuinely open and free, able to build new forms on top of existing forms. Smolin argues, “In the time-bound picture I propose, the universe is a process for breeding novel phenomena and states of organization, which will forever renew itself as it evolves to states of ever higher complexity and organization. The observational record tells us unambiguously that the universe is getting more interesting as time goes on.” (p. 194)

And yet, despite his openness to the idea of genuine novelty in the evolution of the universe, even Smolin is unable to get away from the idea of mechanisms being ultimately responsible for everything. Smolin writes that the universe began with a particular set of initial conditions and then asks “What mechanism selected the actual initial conditions out of the infinite set of possibilities?” (pp. 97-98) He does not consider the possibility that in the beginning, perhaps there was no mechanism. Indeed, this is the problem with any cosmology that aims to provide a total explanation for existence; as one goes back in time searching for origins, one eventually reaches a first cause that has no prior cause, and thus no causal explanation. One either has to posit a creator-God, an eternal self-sufficient mechanism, or throw up one’s hands and accept that we are faced with an unsolvable mystery.

In fact, Smolin is not as radical as his inspiration, Charles Sanders Peirce. According to Peirce, the universe did not start out with a mechanism but rather began from a condition of maximum freedom and spontaneity, only gradually adopting certain “habits” which evolved into laws. Furthermore, even after the development of laws, the universe retained a great deal of chance and spontaneity. Laws specified certain regularities, but even within these regularities, a great deal of freedom still existed. For example, life forms may have been bound to the surface of the earth and subject to the regular rotation of the earth, the orbit of the earth around the sun, and the limitations of biology, but nonetheless life forms still retained considerable freedom.

Peirce, who believed in God, held that the universe was pervaded not by mechanism but mind, which was by definition characterized by freedom and spontaneity. As the mind/universe developed certain habits, these habits congealed into laws and solid matter. In Peirce's view, "matter . . . [is] mere specialised and partially deadened mind." ("The Law of Mind," The Monist, vol. 2, no. 4, July 1892) This view is somewhat similar to the view of the physicist Werner Heisenberg, who noted that "Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . ."

One contemporary philosopher, Philip Goff of Durham University, following Peirce and other thinkers, has argued that consciousness is not restricted to humans but in fact pervades the universe, from the smallest subatomic particles to the most intelligent human beings. This theory is known as panpsychism. (see Goff’s book Galileo’s Error: Foundations for a New Science of Consciousness) Goff does not argue that atoms, rocks, water, stars, etc. are like humans in their thought process, but that they have experiences, albeit very primitive and simple experiences compared to humans. The difference between the experiences of a human and the experiences of an electron is vast, but the difference still exists on a spectrum; there is no sharp dividing line that dictates that experience ends when one gets down to the level of insects, cells, viruses, molecules, atoms, or subatomic particles. In Dr. Goff’s words:

Human beings have a very rich and complex experience; horses less so; mice less so again. As we move to simpler and simpler forms of life, we find simpler and simpler forms of experience. Perhaps, at some point, the light switches off, and consciousness disappears. But it’s at least coherent to suppose that this continuum of consciousness fading while never quite turning off carries on into inorganic matter, with fundamental particles having almost unimaginably simple forms of experience to reflect their incredibly simple nature. That’s what panpsychists believe. . . .

The starting point of the panpsychist is that physical science doesn’t actually tell us what matter is. . . . Physics tells us absolutely nothing about what philosophers like to call the intrinsic nature of matter: what matter is, in and of itself. So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter. There’s just matter, on this view, nothing supernatural or spiritual. But matter can be described from two perspectives. Physical science describes matter “from the outside,” in terms of its behavior. But matter “from the inside”—i.e., in terms of its intrinsic nature—is constituted of forms of consciousness.

Unfortunately, there is, at present, no proof that the universe is pervaded by mind, nor is there solid evidence that the laws of physics have evolved. We do know that the science of physics is no longer as deterministic as it used to be. The behavior of subatomic particles is not fully predictable, despite the best efforts of physicists for nearly a century, and many physicists now acknowledge this. We also know that the concepts of laws and determinism often fail in the field of biology — there are very few actual laws in biology, and the idea that these laws preexisted life itself seems incoherent. No biologist will tell you that human beings in their present state are the inevitable product of determinist evolution and that if we started the planet Earth all over again, we would end up in 4.5 billion years with exactly the same types of life forms, including humans, that we have now. Nor can biologists predict the movement of life forms the same way that physicists can predict the movement of planets. Life forms do their own thing. Human beings retain their free will and moral responsibility. Still, the notion that the laws of physics are pre-existent and eternal appears to have no solid ground either; it is merely one of those assumptions that has become widely accepted because few have sought to challenge it or even ask for evidence.

What Does Science Explain? Part 5 – The Ghostly Forms of Physics

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria — that is, in relation to how much it describes, it must be rather simple. — John von Neumann (“Method in the Physical Sciences,” in The Unity of Knowledge, 1955)

Now we come to the final part of our series of posts, “What Does Science Explain?” (If you have not already, you can peruse parts 1, 2, 3, and 4 here). As I mentioned in my previous posts, the rise of modern science was accompanied by a change in humanity’s view of metaphysics, that is, our theory of existence. Medieval metaphysics, largely influenced by ancient philosophers, saw human beings as the center or summit of creation; furthermore, medieval metaphysics proposed a sophisticated, multifaceted view of causation. Modern scientists, however, rejected much of medieval metaphysics as subjective and saw reality as consisting mainly of objects impacting or influencing each other in mathematical patterns.  (See The Metaphysical Foundations of Modern Science by E.A. Burtt.)

I have already critically examined certain aspects of the metaphysics of modern science in parts 3 and 4. For part 5, I wish to look more closely at the role of Forms in causation — what Aristotle called “formal causation.” This theory of causation was strongly influenced by Aristotle’s predecessor Plato and his Theory of Forms. What is Plato’s “Theory of Forms”? In brief, Plato argued that the world we see around us — including all people, trees, and animals, stars, planets and other objects — is not the true reality. The world and the things in it are imperfect and perishable realizations of perfect forms that are eternal, and that continually give birth to the things we see. That is, forms are the eternal blueprints of perfection which the material world imperfectly represents. True philosophers do not focus on the material world as it is, but on the forms that material things imperfectly reflect. In order to judge a sculpture, painting, or natural setting, a person must have an inner sense of beauty. In order to evaluate the health of a particular human body, a doctor must have an idea of what a perfectly healthy human form is. In order to evaluate a government’s system of justice, a citizen must have an idea about what perfect justice would look like. In order to critically judge leaders, citizens must have a notion of the virtues that such a leader should have, such as wisdom, honesty, and courage.  Ultimately, according to Plato, a wise human being must learn and know the perfect forms behind the imperfect things we see: we must know the Form of Beauty, the Form of Justice, the Form of Wisdom, and the ultimate form, the Form of Goodness, from which all other forms flow.

Unsurprisingly, many intelligent people in the modern world regard Plato’s Theory of Forms as dubious or even outrageous. Modern science teaches us that sure knowledge can only be obtained by observation and testing of real things, but Plato tells us that our senses are deceptive, that the true reality is hidden behind what we sense. How can we possibly confirm that the forms are real? Even Plato’s student Aristotle had problems with the Theory of Forms and argued that while the forms were real, they did not really exist until they were manifested in material things.

However, there is one important sense in which modern science retained the notion of formal causation, and that is in mathematics. In other words, most scientists have rejected Plato’s Theory of Forms in all aspects except for Plato’s view of mathematics. “Mathematical Platonism,” as it is called, is the idea that mathematical forms are objectively real and are part of the intrinsic order of the universe. However, there are also sharp disagreements on this subject, with some mathematicians and scientists arguing that mathematical forms are actually creations of the human imagination.

The chief difference between Plato and modern scientists on the study of mathematics is this: According to Plato, the objects of geometry — perfect squares, perfect circles, perfect planes — existed nowhere in the material world; we only see imperfect realizations. But the truly wise studied the perfect, eternal forms of geometry rather than their imperfect realizations. Therefore, while astronomical observations indicated that planetary bodies orbited in imperfect circles, with some irregularities and errors, Plato argued that philosophers must study the perfect forms instead of the actual orbits! (The Republic, XXVI, 524D-530C) Modern science, on the other hand, is committed to observation and study of real orbits as well as the study of perfect mathematical forms.

Is it tenable to hold the belief that Plato and Aristotle's view of eternal forms is mostly subjective nonsense, but that they were absolutely right about mathematical forms being real? I argue that this selective borrowing from the ancient Greeks doesn't quite work, and that some of the questions and difficulties with proving the reality of Platonic forms also afflict mathematical forms.

The main argument for mathematical Platonism is that mathematics is absolutely necessary for science: mathematics is the basis for the most important and valuable physical laws (which are usually in the form of equations), and everyone who accepts science must agree that the laws of nature or the laws of physics exist. However, the counterargument to this claim is that while mathematics is necessary for human beings to conduct science and understand reality, that does not mean that mathematical objects or even the laws of nature exist objectively, that is, outside of human minds.

I have discussed some of the mysterious qualities of the “laws of nature” in previous posts (here and here). It is worth pointing out that there remains a serious debate among philosophers as to whether the laws of nature are (a) descriptions of causal regularities which help us to predict or (b) causal forces in themselves. This is an important distinction that most people, including scientists, don’t notice, although the theoretical consequences are enormous. Physicist Kip Thorne writes that laws “force the Universe to behave the way it does.” But if laws have that kind of power, they must be ubiquitous (exist everywhere), eternal (exist prior to the universe), and have enormous powers although they have no detectable energy or mass — in other words, the laws of nature constitute some kind of supernatural spirit. On the other hand, if laws are summary descriptions of causation, these difficulties can be avoided — but then the issue arises: do the laws of nature or of physics really exist objectively, outside of human minds, or are they simply human-constructed statements about patterns of causation? There are good reasons to believe the latter is true.

The first thing that needs to be said is that nearly all of these so-called laws of nature are actually approximations of what really happens in nature, and approximations that hold only under certain restrictive conditions at that. Newton's law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level gravity is completely swamped by other forces. Whether one uses Newton's law depends on the specific conditions and the level of accuracy one requires. Kepler's laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of a single planet. The ideal gas law is an approximation that becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel's laws of genetics that some believe Mendel's laws should not be considered laws at all.
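To make this concrete, here is a minimal sketch (my own illustration, not drawn from any of the sources above) comparing the ideal gas law with the van der Waals equation for one mole of carbon dioxide. The constants are approximate textbook values and the specific numbers are chosen only for illustration; the point is that the "law" tracks reality closely at low pressure and drifts badly as the gas is compressed.

```python
# Illustrative sketch: the ideal gas law as an approximation that breaks down
# at small volumes / high pressures. Constants are approximate textbook values
# for CO2 and are used here only for illustration.

R = 0.08206          # gas constant, L*atm/(mol*K)
a, b = 3.59, 0.0427  # van der Waals constants for CO2 (approximate)

def p_ideal(n, T, V):
    """Ideal gas law: P = nRT / V."""
    return n * R * T / V

def p_vdw(n, T, V):
    """Van der Waals equation, correcting for molecular volume and attraction."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

for V in (25.0, 2.0, 0.5):   # volume in liters, for 1 mol of CO2 at 300 K
    ideal, vdw = p_ideal(1, 300, V), p_vdw(1, 300, V)
    print(f"V = {V:5.2f} L   ideal = {ideal:6.2f} atm   "
          f"van der Waals = {vdw:6.2f} atm   "
          f"difference = {abs(ideal - vdw) / vdw:5.1%}")
```

At 25 liters the two equations agree to within a fraction of a percent; at half a liter they differ by roughly a quarter. The same law, applied outside its restrictive conditions, quietly stops describing what actually happens.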

The fact of the matter is that even with the best laws that science has come up with, we still can’t predict the motions of more than two interacting astronomical bodies without making unrealistic simplifying assumptions. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has concluded that the laws of nature or physics are actually cobbled together by scientists based on multiple criteria:

Briefly we may say that typical physical laws express a relationship between quantities or a property of systems which is the simplest useful approximation to the true physical behavior and which appears to be theoretically tractable. “Simplest” is vague in many cases, but clear for the extreme cases which provide its only use. “Useful” is a function of accuracy and range and purpose. (Michael Scriven, “The Key Property of Physical Laws — Inaccuracy,” in Current Issues in the Philosophy of Science, ed. Herbert Feigl)

The response to this argument is that it doesn’t disprove the objective existence of physical laws — it simply means that the laws that scientists come up with are approximations to real, objectively existing underlying laws. But if that is the case, why don’t scientists simply state what the true laws are? Because the “laws” would actually end up being extremely long and complex statements of causation, with so many conditions and exceptions that they would not really be considered laws.

An additional counterargument to mathematical Platonism is that while mathematics is necessary for science, it is not necessary for the universe. This is another important distinction that many people overlook. Understanding how things work often requires mathematics, but that doesn’t mean the things in themselves require mathematics. The study of geometry has given us pi and the Pythagorean theorem, but a child does not need to know these things in order to draw a circle or a right triangle. Circles and right triangles can exist without anyone, including the universe, knowing the value of pi or the Pythagorean theorem. Calculus was invented in order to understand change and acceleration; but an asteroid, a bird, or a cheetah is perfectly capable of changing direction or accelerating without needing to know calculus.

Even among mathematicians and scientists, there is a significant minority who have argued that mathematical objects are actually creations of the human imagination, that math may be used to model aspects of reality, but it does not necessarily do so. Mathematicians Philip J. Davis and Reuben Hersh argue that mathematics is the study of "true facts about imaginary objects." Derek Abbott, a professor of engineering, writes that engineers tend to reject mathematical Platonism: "the engineer is well acquainted with the art of approximation. An engineer is trained to be aware of the frailty of each model and its limits when it breaks down. . . . An engineer . . . has no difficulty in seeing that there is no such a thing as a perfect circle anywhere in the physical universe, and thus pi is merely a useful mental construct." ("The Reasonable Ineffectiveness of Mathematics") Einstein himself, making a distinction between mathematical objects used as models and pure mathematics, wrote that "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Hartry Field, a philosopher at New York University, has argued that mathematics is a useful fiction that may not even be necessary for science. Field goes so far as to show that it is possible to reconstruct Newton's theory of gravity without using mathematics. (There is more discussion on this subject here and here.)

So what can we conclude about the existence of forms? I have to admit that although I'm skeptical, I have no sure conclusions. It seems unlikely that forms exist outside the mind . . . but I can't prove they don't exist either. Forms do seem to be necessary for human reasoning — no thinking human can do without them. And forms seem to be rooted in reality: perfect circles, perfect squares, and perfect human forms can be thought of as imaginative projections of things we see, unlike Sherlock Holmes or fire-breathing dragons or flying spaghetti monsters, which are more creatively fictitious. Perhaps one could reconcile these opposing views on forms by positing that the human mind and imagination are part of the universe itself, and that the universe is becoming increasingly consciously aware.

Another way to think about this issue was offered by Robert Pirsig in Zen and the Art of Motorcycle Maintenance. According to Pirsig, Plato made a mistake by positing Goodness as a form. Even considered as the highest form, Goodness (or “Quality,” in Pirsig’s terminology) can’t really be thought of as a static thing floating around in space or some otherworldly realm. Forms are conceptual creations of humans who are responding to Goodness (Quality). Goodness itself is not a form, because it is not an unchanging thing — it is not static or even definable. It is “reality itself, ever changing, ultimately unknowable in any kind of fixed, rigid way.” (p. 342) Once we let go of the idea that Goodness or Quality is a form, we can realize that not only is Goodness part of reality, it is reality.

As conceptual creations, ideal forms are found in both science and religion. So why, then, does there seem to be such a sharp split between science and religion as modes of knowledge? I think it comes down to this: science creates ideal forms in order to model and predict physical phenomena, while religion creates ideal forms in order to provide guidance on how we should live.

Scientists like to see how things work — they study the parts in order to understand how the wholes work. To increase their understanding, scientists may break down certain parts into smaller parts, and those parts into even smaller parts, until they come to the most fundamental, indivisible parts. Mathematics has been extremely useful in modeling and understanding these parts of nature, so scientists create and appreciate mathematical forms.

Religion, on the other hand, tends to focus on larger wholes. The imaginative element of religion envisions perfect states of being, whether it be the Garden of Eden or the Kingdom of Heaven, as well as perfect (or near perfect) humans who serve as prophets or guides to a better life. Religion is less concerned with how things work than with how things ought to work, how things ought to be. So religion will tend to focus on subjects not covered by science, including the nature and meaning of beauty, love, and justice. There will always be debates about the appropriateness of particular forms in particular circumstances, but the use of forms in both science and religion is essential to understanding the universe and our place in it.

What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on "What Does Science Explain?" (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of four causes as "final causation," because it referred to the goals or ends of all things (the Greek word "telos" means "goal," "purpose," or "end"). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role to teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception of God. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? In the late twentieth century, a number of physicists began to speculate that this was the case, when their research indicated that the physical forces and constants of the universe must fall within a very narrow range of values for life to be possible, or even for the universe to exist. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe "know we were coming?" Consider the fact that in the early universe after the Big Bang, the only elements that existed were the "light" elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of fusion. In fact, stars have been referred to as the "factories" of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these elements are essential to human life. Without the elements produced earlier by stars we would not be here. It has been aptly said that human beings are made of "stardust."

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — it is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars made heavy elements, and he noted that the creation of heavy elements required very specific values in order for the process to work. When he concluded his research Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that the wholes are greater than the parts, and the behavior of wholes often has characteristics that are radically different from the parts that they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: reductionism (upward causation only)]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: reductionism and holism (upward and downward causation)]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the "laws of nature" themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists' assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a 'place' or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a 'natural' and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. 'Who ordered that?' he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of "normal science," Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a "paradigm") undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth was, in Kuhn's view, highly problematic. He felt there was no theory-independent way of saying what was really "out there" — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles, predicts more phenomena to a greater degree of accuracy, the theory must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists,” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)
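To get a feel for the size of this tradeoff, the leading general-relativistic correction to Newtonian gravity is a dimensionless number of order 2GM/(rc²), which can be estimated in a few lines. The sketch below is a back-of-the-envelope illustration of my own (the constants are standard values), not anything drawn from Kuhn or the other sources cited here.

```python
# Rough estimate of how far Newtonian gravity deviates from general relativity:
# the correction is on the order of 2GM / (r * c^2), a dimensionless quantity.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gr_correction(mass_kg, r_m):
    """Approximate fractional deviation from Newtonian gravity at distance r."""
    return 2 * G * mass_kg / (r_m * c**2)

print(gr_correction(5.97e24, 6.371e6))   # at Earth's surface: about 1e-9
print(gr_correction(1.989e30, 5.79e10))  # at Mercury's orbit around the Sun: about 5e-8
```

For almost every practical purpose the correction is vanishingly small, which is why the simpler theory remains the working choice; only in strong gravitational fields or where extreme precision is required do Einstein's equations earn their added complexity.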

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model; sometimes focusing on causation actually gets in the way of developing a good predictive model. Why? Sometimes it's difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable, even if those variables are merely associated with certain outcomes and may not cause those outcomes. To choose a very simple example, a model that posits that a rooster crowing leads to the rising of the sun can be a very good predictive model while saying nothing about causation. And there are actually many examples of this in contemporary scientific practice. Scientists working for the Netflix corporation on improving the prediction of customers' movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don't have a true causal model. (See Galit Shmueli, "To Explain or to Predict?" in Statistical Science, 2010, vol. 25, no. 3)
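The point about prediction without causation can be shown with a toy model. In the hypothetical sketch below (my own construction, not the Netflix model and not taken from Shmueli's paper), a hidden common cause drives both an observable proxy and the outcome; a regression fitted on the proxy alone predicts the outcome almost perfectly even though the proxy causes nothing.

```python
# Toy illustration of prediction without causation: a hidden common cause
# drives both a measurable proxy and the outcome, so the proxy predicts the
# outcome well despite having no causal influence on it.
import numpy as np

rng = np.random.default_rng(0)
hidden_cause = rng.normal(size=1000)                        # unobserved causal variable
proxy = hidden_cause + rng.normal(scale=0.1, size=1000)     # observable, non-causal
outcome = 2.0 * hidden_cause + rng.normal(scale=0.1, size=1000)

# Ordinary least squares of outcome on the proxy alone.
slope, intercept = np.polyfit(proxy, outcome, 1)
predicted = slope * proxy + intercept
r_squared = 1 - np.var(outcome - predicted) / np.var(outcome)
print(f"R^2 using only the non-causal proxy: {r_squared:.3f}")   # close to 1
```

The model is excellent for prediction and useless as an account of why anything happens, which is exactly the distinction at stake here.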

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, that there are no standards. Making good maps requires knowledge and great skill.

As I noted above, physicists tend to prefer Newton’s theory of gravity to Einstein’s for predicting the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime; space and time are relative, not absolute; spacetime is curved in the presence of mass; and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers only an incremental improvement over Newton’s, but Einstein’s picture of gravity is so radically different that Kuhn was right in seeing Einstein’s theory as a revolution. Scientists nevertheless continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient. (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

What Are the Laws of Nature? – Part Two

In a previous post, I discussed the mysterious status of the “laws of nature,” pointing out that these laws seem to be eternal, omnipresent, and possessing enormous power to shape the universe, although they have no mass and no energy.

There is, however, an alternative view of the laws of nature proposed by thinkers such as Ronald Giere and Nancy Cartwright, among others. In this view, it is a fallacy to suppose that the laws of nature exist as objectively real entities — rather, what we call the laws of nature are simplified models that the human mind creates to explain and predict the operations of the universe. The laws were created by human beings to organize information about the cosmos. As such, the laws are not fully accurate descriptions of how the universe actually works, but generalizations; and like nearly all generalizations, there are numerous exceptions when the laws are applied to particular circumstances. We retain the generalizations because they excel at organizing and summarizing vast amounts of information, but we should never make the mistake of assuming that the generalizations are real entities. (See Science Without Laws and How the Laws of Physics Lie.)

Consider one of the most famous laws of nature, Isaac Newton’s law of universal gravitation. According to this law, the gravitational relationship between any two bodies in the universe is determined by the masses of the two bodies and their distance from each other. More specifically, any two bodies in the universe attract each other with a force that is (1) directly proportional to the product of their masses and (2) inversely proportional to the square of the distance between them. The equation is quite simple:

F = G \frac{m_1 m_2}{r^2}

where F is the force between the two bodies, G is the gravitational constant, m1 and m2 are the masses of the two bodies, and r is the distance between the centers of the two bodies.
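As a minimal numerical sketch (the constants and the earth-moon figures below are standard rounded values, not anything taken from Newton), the law can be applied directly:

    # Newton's law of universal gravitation: F = G * m1 * m2 / r^2
    G = 6.674e-11        # gravitational constant, N m^2 / kg^2
    m_earth = 5.972e24   # mass of the earth, kg
    m_moon = 7.35e22     # mass of the moon, kg
    r = 3.844e8          # mean earth-moon distance, m

    F = G * m_earth * m_moon / r**2
    print(f"Earth-moon gravitational attraction: {F:.2e} N")   # roughly 2e20 N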

Newton’s law was quite valuable in helping predict the motions of the planets in our solar system, but in some cases the formula did not quite match astronomical observations. The orbit of the planet Mercury in particular never fit Newton’s law, no matter how much astronomers tried to fiddle with the law to get the right results. It was only when Einstein introduced his general theory of relativity that astronomers could correctly predict the motions of all the planets, including Mercury. Why did Einstein’s theory work better for Mercury? Because, as the planet closest to the sun, Mercury is the most affected by the sun’s massive gravitation, and Newton’s law becomes less accurate under conditions of very strong gravitation.
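The size of Mercury's misbehavior can be estimated with the standard first-order relativistic correction for the advance of a planet's perihelion (a back-of-the-envelope sketch; the orbital figures below are rounded published values). General relativity predicts an extra advance of about 43 arcseconds per century, which is just the stubborn residual that no amount of fiddling with Newton's law could explain:

    import math

    # First-order general-relativistic perihelion advance per orbit (radians):
    #   delta_phi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30       # mass of the sun, kg
    c = 2.998e8            # speed of light, m/s
    a = 5.79e10            # Mercury's semi-major axis, m
    e = 0.2056             # Mercury's orbital eccentricity
    period_days = 87.97    # Mercury's orbital period

    advance_per_orbit = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))
    orbits_per_century = 36525 / period_days
    arcsec_per_century = math.degrees(advance_per_orbit * orbits_per_century) * 3600
    print(f"Relativistic perihelion advance: about {arcsec_per_century:.0f} arcsec per century")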

Einstein’s equations for gravity are known as the “field equations,” and although they are better at predicting the motions of the planets, they are extremely complex — often too complex to be practical. In fact, physicist Stephen Hawking has noted that scientists still often use Newton’s law of gravity because it is much simpler and a good enough approximation in most cases.

So what does this imply about the reality of Newton’s law of universal gravitation? Does Newton’s law float around in space or in some transcendent realm directing the motions of the planets, until the gravitation becomes too large, and then it hands off its duties to the Einstein field equations? No, of course not. Newton’s law is an approximation that works for many, but not all cases. Physicists use it because it is simple and “good enough” for most purposes. When the approximations become less and less accurate, a physicist may switch to the Einstein field equations, but this is a human value judgment, not the voice of nature making a decision to switch equations.

One other fact is worth noting: in Newton’s theory, gravity is a force between two bodies. In Einstein’s theory, gravity is not a real force — what we call a gravitational force is simply how we perceive the distortion of the space-time fabric caused by massive objects. Physicists today refer to gravity as a “fictitious force.” So why do professors of physics continue to use Newton’s law and teach this “fictitious force” law to their students? Because it is simpler to use and still a good enough approximation for most cases. Newton’s law can’t possibly be objectively real — if it is, Einstein is wrong.

The school of thought known as “scientific realism” would dispute these claims, arguing that even if the laws of nature as we know them are approximations, there are still real, objective laws underneath these approximations, and as science progresses, we are getting closer and closer to knowing what these laws really are. In addition, realists argue that it would be absurd to suppose that we could make progress in technology unless we were getting better and better at knowing what the true laws are really like.

The response of Ronald Giere and Nancy Cartwright to the realists is as follows: it’s a mistake to assume that, just because our laws are approximations and our approximations are getting better and better, there must be real laws underneath. What if nature is inherently so complex in its causal variables and sequences that there is no objectively real law underneath it all? Cartwright notes that engineers who must build and maintain technological devices never apply the “laws of nature” directly to their work without a great deal of tinkering and modification to get their mental models to match the specific details of their device. The final blueprint that engineers create is a highly specific and highly complex model that is a precise match for the device, but of very limited generalizability to the universe as a whole. In other words, there is an inherent and unavoidable tradeoff between explanatory power and accuracy. We value the laws of nature because they have very high explanatory power, but specific circumstances always involve a mix of causal forces that confound the predictions of the general law. In order to understand how two bodies behave, you need to know not only gravity but also the electric charge of the two bodies, the nuclear forces, any chemical forces, the temperature, the speeds of the objects, and additional factors, some of which can never be calculated precisely. According to Cartwright,

. . . theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behavior is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all. . . . God may have written just a few laws and grown tired. We do not know whether we are living in a tidy universe or an untidy one. (How the Laws of Physics Lie, p. 49)

Cartwright makes it clear that she believes in causal powers in nature — it’s just that causal powers are not the same as laws, which are simply general principles for organizing information.

Some philosophers and scientists would go even further. They argue that science is able to develop and improve models for predicting phenomena, but the underlying nature of reality cannot be grasped directly, even if our models are quite excellent at predicting. This is because there are always going to be aspects of nature that are non-observable and there are often multiple theories that can explain the same phenomenon. This school of thought is known as instrumentalism.

Stephen Hawking appears to be sympathetic to such a view. In a discussion of his use of “imaginary time” to model how the universe developed, Hawking stated “a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds. So it is meaningless to ask: which is real, “real” or “imaginary” time? It is simply a matter of which is the more useful description.” (A Brief History of Time, p. 144) In a later essay, Hawking made the case for what he calls “model-dependent realism.” He argues:

it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither one can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration. . . . Each theory may have its own version of reality, but according to model-dependent realism, that diversity is acceptable, and none of the versions can be said to be more real than any other.

Hawking concludes that given these facts, it may well be impossible to develop a unified theory of everything, that we may have to settle for a diversity of models. (It’s not clear to me how Hawking’s “model-dependent realism” differs from instrumentalism, since they seem to share many aspects.)

Intuitively, we are apt to conclude that our progress in technology is proof enough that we are understanding reality better and better, getting closer and closer to the Truth. But it’s actually quite possible for science to develop better and better predictive models while still retaining very serious doubts and disputes about many fundamental aspects of reality. Among physicists and cosmologists today, there is still disagreement on the following issues: Are there really such things as subatomic particles, or are these entities actually fields, or something else entirely? Is the flow of time an illusion, or is time the chief fundamental reality? Are there an infinite number of universes in a wider multiverse, with infinite versions of you, or is this multiverse theory a mistaken interpretation of uncertainty at the quantum level? Are the constants of the universe really constant, or do they sometimes change? Are mathematical objects themselves the ultimate reality, or do they exist only in the mind? A number of philosophers of science have concluded that science does indeed progress by creating more and better models for predicting, but they make an analogy to evolution: life forms may be advancing and improving, but that doesn’t mean they are getting closer and closer to some final goal.

In my previous post, I discussed the view that the “laws of nature” appear to exist everywhere and have the awesome power to shape the universe and direct the motions of the stars and planets, despite the fact that the laws themselves have no mass and no energy. But if the laws of nature are creations of our minds, what then? I can’t prove that there are no real laws behind the mental models that we create. It seems likely that there must be some such laws, but perhaps they are so complex that the best we can do is create simplified models of them. Or perhaps we must acknowledge that the precise nature of the cosmological order is mysterious, and that any attempt to understand and describe this order must use a variety of concepts, analogies, and stories created by our minds. Some of these concepts, analogies, and stories are clearly better than others, but we will never find one mental model that is a perfect fit for all aspects of reality.

What Are the Laws of Nature?

According to modern science, the universe is governed by laws, and it is the job of scientists to discover those laws. However, the question of where these laws come from, and what their precise nature is, remains mysterious.

If laws are all that are needed to explain the origins of the universe, the laws must somehow have existed prior to the universe, that is, eternally. But this raises some puzzling issues. Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed? Does it make sense to speak of the law of conservation of mass existing before mass existed? For that matter, does it make sense to speak of Mendel’s laws of genetics existing before there was DNA, before there were nucleotides to make up DNA, before there were even atoms of carbon and nitrogen to make up nucleotides? It took the universe somewhere between 150 million and 1 billion years to create the first heavy elements, including atoms of carbon and nitrogen. Were Mendel’s laws of genetics sitting around impatiently that whole time, waiting for something to happen? Or does it make sense to think of laws evolving with the universe? In that case we still have a chicken-and-egg question: did evolving laws precede the creation of material forms, or did evolving material forms precede the laws?

Furthermore, where do the laws of nature exist? Do they exist in some other-worldly Platonic realm beyond time and space? Many, if not most, mathematicians and physicists are inclined to believe that mathematical equations run the universe, and that these equations exist objectively. But if laws/equations govern the operations of the universe, they must exist everywhere, even though we can’t sense them directly at all. Why? Because, according to Einstein, information cannot travel instantaneously across large distances – in fact, information cannot travel faster than the speed of light. Now, the radius of the observable universe is about 46 billion light-years, so if we imagine the laws of nature floating around in space at the center of the universe, it would take at least 46 billion years for the commands issued by the laws of nature to reach the edge of the universe — much too slow. Even within our tiny solar system, it takes a little over 8 minutes for light from the sun to reach the earth, so information flow across even that small distance would involve a significant time lag. However, our astronomical observations indicate no lag time — the effect of the laws is instantaneous, indicating that the laws must exist everywhere. In other words, the laws of nature have the property of omnipresence.
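These figures are easy to check with a back-of-the-envelope calculation (the speed of light and the mean earth-sun distance below are standard values):

    c = 299_792_458      # speed of light, m/s
    au = 1.496e11        # mean earth-sun distance, m

    print(f"Sun to earth: about {au / c / 60:.1f} minutes")   # about 8.3 minutes

    # A signal crossing the roughly 46-billion-light-year radius of the
    # observable universe would, by definition, take about 46 billion years.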

What sort of power do the laws of nature have? Since they direct the operations of the universe, they must have immense power. Either they have the capability to directly shape and move stars, planets, and entire galaxies, or they simply issue commands that stars, planets, and galaxies follow. In either case, should not this power be detectable as a form of energy? And if it is a form of energy, shouldn’t this energy have the potential to be converted into matter, according to the principle of mass-energy equivalence? In that case, the laws of nature should, in principle, be observable as energy or mass. But the laws of nature appear to have no detectable energy and no detectable mass.

Finally, there is the question of the fundamental unity of the laws of nature, and where that unity comes from. A mere collection of unconnected laws does not necessarily bring about order. Laws have to be integrated in a harmonious fashion so that they establish a foundation of order and allow the evolution of increasingly complex forms, from hydrogen atoms to heavier atomic elements to molecules to DNA to complex life forms to intelligent life forms. The fact of the matter is that it does not take much variation in the values of certain physical constants to cause a collapse of the universe or the development of a universe that is incapable of supporting life. According to physicist Paul Davies:

There are endless ways in which the universe might have been totally chaotic. It might have had no laws at all, or merely an incoherent jumble of laws that caused matter to behave in disorderly or unstable ways. . . . the various forces of nature are not just a haphazard conjunction of disparate influences. They dovetail together in a mutually supportive way which bestows upon nature stability and harmony. . . (The Mind of God: The Scientific Basis for a Rational World, pp. 195-96)

There is a counterargument to this claim of essential unity in the laws of nature: according to theories of the multiverse, new universes are constantly being created with different physical laws and parameters — we just happen to live in a universe that supports life because only a universe that supports life can have observers who speculate about the orderliness of the universe! However, multiverse theories have been widely criticized for being non-falsifiable, since we can’t directly observe other universes.

So, if we are to believe the findings of modern science, the laws of nature have the following characteristics:

  1. They have existed eternally, prior to everything.
  2. They are omnipresent – they exist everywhere.
  3. They are extremely powerful, though they have no energy and no mass.
  4. They are unified and integrated in such a way as to allow the development of complex forms, such as life (at least in this universe, the only universe we can directly observe).

Are these not the characteristics of a universal spirit? Moreover, is not this spirit by definition supernatural, i.e., existing above nature and responsible for the operations of nature?

Please note that I am not arguing here that the laws of nature prove the existence of a personal God who is able to shape, reshape, and interfere with the laws of nature anytime He wishes. I think that modern science has more than adequately demonstrated that the idea of a personal being who listens to our prayers and temporarily suspends or adjusts the laws of nature in response to our prayers or sins is largely incompatible with the evidence we have accumulated over hundreds of years. Earthquakes happen because of shifting tectonic plates, not because certain cities have committed great evils. Disease happens because viruses and bacteria mutate, reproduce, and spread, not because certain people deserve disease. And despite the legend of Moses saving the Jews by parting the Red Sea and then destroying the Pharaoh’s army, God did not send a tsunami to wipe out the Nazis — the armies of the Allied Forces had to do that.

What I am arguing is that if you look closely at what modern science claims about the laws of nature, there is not much that separates these laws from the concept of a universal spirit, even if this spirit is not equivalent to an omnipotent, personal God.

The chief objection to the idea of the laws of nature as a universal spirit is that the laws of nature have the characteristics of mindless regularity and determinism, which are not the characteristics we think of when we think of a spirit. But consider this: the laws of nature do not in fact dictate invariable regularities in all domains, but in fact allow scope for indeterminacy, freedom, and creativity.

Consider activity at the subatomic level. Scientists have studied the behavior of subatomic particles for many decades, and they have discovered laws of behavior for those particles, but the laws are probabilistic, not deterministic. Physicist Richard Feynman, who won a Nobel Prize for his work on the physics of subatomic particles, described the odd world of subatomic behavior as follows: “The electron does whatever it likes.” It travels through space and time in all possible ways, and can even travel backward in time! Feynman was able to offer guidance on how to predict the future location of an electron, but only in terms of a probability based on calculating all the possible paths that the electron could choose.
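A minimal illustration of this "sum over possible paths" idea is the textbook two-slit setup (my example, not one Feynman's quote refers to directly): to predict where the electron will land on a screen, you add one complex amplitude for each path it could take, through slit A or through slit B, and square the magnitude of the sum to get a relative probability. In a sketch like the following (units and geometry are arbitrary), the interference between the two paths is what produces the pattern of likely and unlikely landing spots:

    import numpy as np

    # Toy two-slit calculation: one complex amplitude per possible path,
    # then |sum|^2 gives a relative detection probability (Born rule).
    wavelength = 1.0                      # arbitrary units
    d = 5.0                               # slit separation
    L = 100.0                             # distance from slits to screen
    k = 2 * np.pi / wavelength

    x = np.linspace(-30, 30, 7)           # candidate positions on the screen
    r_a = np.sqrt(L**2 + (x - d / 2)**2)  # path length through slit A
    r_b = np.sqrt(L**2 + (x + d / 2)**2)  # path length through slit B

    amplitude = np.exp(1j * k * r_a) + np.exp(1j * k * r_b)   # sum over the two paths
    probability = np.abs(amplitude)**2                         # relative probability

    for xi, p in zip(x, probability):
        print(f"x = {xi:6.1f}: relative probability {p:.2f}")

No single path is singled out in advance; the electron's landing spot can only be predicted as a probability, in the spirit of Feynman's sum-over-paths method.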

This freedom on the subatomic level manifests itself in behavior on the atomic level, particularly in the element known as carbon. As Robert Pirsig notes:

One physical characteristic that makes carbon unique is that it is the lightest and most active of the group IV atoms whose chemical bonding characteristics are ambiguous. Usually the positively valenced metals in groups I through III combine chemically with negatively valenced nonmetals in groups V through VII and not with other members of their own group. But the group containing carbon is halfway between the metals and nonmetals, so that sometimes carbon combines with metals and sometimes with nonmetals and sometimes it just sits there and doesn’t combine with anything, and sometimes it combines with itself in long chains and branched trees and rings. . . . this ambiguity of carbon’s bonding preferences was the situation the weak Dynamic subatomic forces needed. Carbon bonding was a balanced mechanism they could take over. It was a vehicle they could steer to all sorts of freedom by selecting first one bonding preference and then another in an almost unlimited variety of ways. . . . Today there are more than two million known compounds of carbon, roughly twenty times as many as all the other known chemical compounds in the world. The chemistry of life is the chemistry of carbon. What distinguishes all the species of plants and animals is, in the final analysis, differences in the way carbon atoms choose to bond. (Lila, p. 168.)

And the life forms constructed by carbon atoms have the most freedom of all — which is why there are few invariable laws in biology that allow predictions as accurate as the predictions of physical systems. A biologist will never be able to predict the motion and destiny of a life form in the same way an astrophysicist can predict the motion of the planets in a solar system.

If you think about the nature of the universal order, regularity and determinism are precisely what is needed on the largest scale (stars, planets, and galaxies), with spontaneity and freedom restricted to the smaller scales of the subatomic/atomic and the biological. If stars and planets were as variable and unpredictable as subatomic particles and life forms, there would be no stable solar systems, and no way for life to develop. Regularity and determinism on the large scale provide the stable foundation and firm boundaries needed for freedom, variety, and experimentation on the small scale. In this conception, the universal spirit contains the laws of nature, but also has a freedom that goes beyond the laws.

However, it should be noted that there is another view of the laws of nature. In this view, the laws of nature do not have any existence outside of the human mind — they are simply approximate models of the cosmic order that human minds create to understand that order. This view will be discussed in a subsequent post.
