Zen and the Art of Science: A Tribute to Robert Pirsig

Author Robert Pirsig, widely acclaimed for his bestselling books Zen and the Art of Motorcycle Maintenance (1974) and Lila (1991), passed away at his home on April 24, 2017. A well-rounded intellectual equally at home in the sciences and the humanities, Pirsig made the case that scientific inquiry, art, and religious experience were all particular forms of knowledge arising out of a broader form of knowledge about the Good, or what Pirsig called “Quality.” Yet despite the popularity of his books, contemporary debates about science and religion are oddly neglectful of Pirsig’s work. So what did Pirsig claim about the common roots of human knowledge, and how do his arguments provide a basis for reconciling science and religion?

Pirsig gradually developed his philosophy as a response to a crisis in the foundations of scientific knowledge, a crisis he first encountered while pursuing studies in biochemistry. The popular consensus at the time was that scientific methods promised objectivity and certainty in human knowledge. One developed hypotheses, conducted observations and experiments, and came to a conclusion based on objective data. That was how scientific knowledge accumulated.

However, Pirsig noted that, contrary to his own expectations, the number of hypotheses could easily grow faster than experiments could test them. One could not just come up with hypotheses – one had to make good hypotheses, ones that could eliminate the need for endless and unnecessary observations and testing. Good hypotheses required mental inspiration and intuition, components that were mysterious and unpredictable.  The greatest scientists were precisely like the greatest artists, capable of making immense creative leaps before the process of testing even began.  Without those creative leaps, science would remain on a never-ending treadmill of hypothesis development – this was the “infinity of hypotheses” problem.  And yet, the notion that science depended on intuition and artistic leaps ran counter to the established view that the scientific method required nothing more than reason and the observation and recording of an objective reality.

Consider Einstein. One of history’s greatest scientists, Einstein hardly ever conducted actual experiments. Rather, he frequently engaged in “thought experiments,” imagining what it would be like to chase a beam of light, what it would feel like to be in a falling elevator, and what a clock would look like if the streetcar he was riding raced away from the clock at the speed of light.

One of the most fruitful sources of hypotheses in science is mathematics, a discipline which consists of the creation of symbolic models of quantitative relationships. And yet, the nature of mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. The Indian mathematical genius Srinivasa Ramanujan was a Hindu mystic who believed that solutions were revealed to him in dreams by the goddess Namagiri.

Intuition and inspiration were human solutions to the infinity-of-hypotheses problem. But Pirsig noted there was a related problem that had to be solved — the infinity of facts.  Science depended on observation, but the issue of which facts to observe was neither obvious nor purely objective.  Scientists had to make value judgments as to which facts were worth close observation and which facts could be safely overlooked, at least for the moment.  This process often depended heavily on an imprecise sense or feeling, and sometimes mere accident brought certain facts to scientists’ attention. What values guided the search for facts? Pirsig cited Poincare’s work The Foundations of Science. According to Poincare, general facts were more important than particular facts, because one could explain more by focusing on the general than the specific. Desire for simplicity was next – by beginning with simple facts, one could begin the process of accumulating knowledge about nature without getting bogged down in complexity at the outset. Finally, interesting facts that yielded genuinely new findings mattered more than trivial ones. The point was not to gather as many facts as possible but to condense as much experience as possible into a small volume of interesting findings.

Research on the human brain supports the idea that the ability to value is essential to the discernment of facts.  Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom had previously been known as shrewd and capable businessmen, experienced a serious decline in their competency after damage to the emotional centers of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality test scores were normal.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making in the face of infinite facts became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51) Damasio discusses several other similar case studies.

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion, and therefore, hypothetically capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools for effective decision-making in everyday life are needed for the scientific enterprise as well. A value-free scientist would not only be unable to sustain the social interaction that science requires; he or she would also be unable to develop a research plan, manage his or her time, or stick to that plan.

_________

Where Pirsig’s philosophy becomes particularly controversial and difficult to understand is in his approach to the truth. The dominant view of truth today is known as the “correspondence” theory of truth – that is, any human statement that is true must correspond precisely to something objectively real. In this view, the laws of physics and chemistry are real because they correspond to actual events that can be observed and demonstrated. Pirsig argues on the contrary that in order to understand reality, human beings must invent symbolic and conceptual models, that there is a large creative component to these models (it is not just a matter of pure correspondence to reality), and that multiple such models can explain the same reality even if they are based on wholly different principles. Math, logic, and even the laws of physics are not “out there” waiting to be discovered – they exist in the mind, which doesn’t mean that these things are bad or wrong or unreal.

There are several reasons why our symbolic and conceptual models don’t correspond literally to reality, according to Pirsig. First, there is always going to be a gap between reality and the concepts we use to describe reality, because reality is continuous and flowing, while concepts are discrete and static. The creation of concepts necessarily calls for cutting reality into pieces, but there is no one right way to divide reality, and something is always lost when this is done. In fact, Pirsig noted, our very notions of subjectivity and objectivity, the former allegedly representing personal whims and the latter representing truth, rested upon an artificial division of reality into subjects and objects; in fact, there were other ways of dividing reality that could be just as legitimate or useful. In addition, concepts are necessarily static – they can’t be always changing or we would not be able to make sense of them. Reality, however, is always changing. Finally, describing reality is not always a matter of using direct and literal language but may require analogy and imaginative figures of speech.

Because of these difficulties in expressing reality directly, a variety of symbolic and conceptual models, based on widely varying principles, are not only possible but necessary – necessary for science as well as other forms of knowledge. Pirsig points to the example of the crisis that occurred in mathematics in the nineteenth century. For many centuries, it was widely believed that geometry, as developed by the ancient Greek mathematician Euclid, was the most exact of all of the sciences.  Based on a small number of axioms from which one could deduce multiple propositions, Euclidean geometry represented a nearly perfect system of logic.  However, while most of Euclid’s axioms were seemingly indisputable, mathematicians had long experienced great difficulty in satisfactorily demonstrating the truth of one of the chief axioms on which Euclidean geometry was based, the parallel postulate. This nagging doubt grew into a full-blown crisis when mathematicians discovered that they could reverse or negate this axiom and create alternative systems of geometry that were every bit as logical and valid as Euclidean geometry.  The science of geometry was gradually replaced by the study of multiple geometries. Pirsig cited Poincare, who pointed out that the principles of geometry were not eternal truths but definitions and that the test of a system of geometry was not whether it was true but how useful it was.
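
To make the alternatives concrete, here is the parallel postulate stated in its modern (Playfair) form alongside its two possible negations; the formulation is my own summary, not Pirsig’s or Poincare’s wording:

```latex
% Euclid's parallel postulate (Playfair's form) and its two negations.
% Each choice yields an internally consistent geometry, distinguished
% by the angle sum of a triangle (alpha + beta + gamma).
\begin{align*}
\text{Euclidean:}  \quad & \text{exactly one line through } P \text{ parallel to } \ell
    & \alpha+\beta+\gamma &= 180^{\circ}\\
\text{Hyperbolic:} \quad & \text{infinitely many such parallels}
    & \alpha+\beta+\gamma &< 180^{\circ}\\
\text{Elliptic:}   \quad & \text{no such parallels}
    & \alpha+\beta+\gamma &> 180^{\circ}
\end{align*}
```

Here P is any point not lying on the line. Gauss, Bolyai, and Lobachevsky worked out the hyperbolic case, and Riemann the elliptic case, showing that both are every bit as consistent as Euclid’s geometry; that is the discovery Pirsig and Poincare are pointing to.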

So how do we judge the usefulness or goodness of our symbolic and conceptual models? Traditionally, we have been told that pure objectivity is the only solution to the chaos of relativism, in which nothing is absolutely true. But Pirsig pointed out that this hasn’t really been how science has worked. Rather, models are constructed according to the often competing values of simplicity and generalizability, as well as accuracy. Theories aren’t just about matching concepts to facts; scientists are guided by a sense of the Good (Quality) to encapsulate as much of the most important knowledge as possible into a small package. But because there is no one right way to do this, rather than converging to one true symbolic and conceptual model, science has instead developed a multiplicity of models. This has not been a problem for science, because if a particular model is useful for addressing a particular problem, that is considered good enough.

The crisis in the foundations of mathematics created by the discovery of non-Euclidean geometries and other factors (such as the paradoxes inherent in set theory) has never really been resolved. Mathematics is no longer the source of absolute and certain truth, and in fact, it never really was. That doesn’t mean that mathematics isn’t useful – it certainly is enormously useful and helps us make true statements about the world. It’s just that there’s no single perfect and true system of mathematics. (On the crisis in the foundations of mathematics, see the papers here and here.) Mathematical axioms, once believed to be certain truths and the foundation of all proofs, are now considered definitions, assumptions, or hypotheses. And a substantial number of mathematicians now declare outright that mathematical objects are imaginary, that particular mathematical formulas may be used to model real events and relationships, but that mathematics itself has no existence outside the human mind. (See The Mathematical Experience by Philip J. Davis and Reuben Hersh.)

Even some basic rules of logic accepted for thousands of years have come under challenge in the past hundred years, not because they are absolutely wrong, but because they are inadequate in many cases, and a different set of rules is needed. The Law of the Excluded Middle states that any proposition must be either true or false (“P” or “not P” in symbolic logic). But ever since mathematicians discovered propositions which are possibly true but not provable, a third category of “possible/unknown” has been added. Other systems of logic have been invented that use the idea of multiple degrees of truth, or even an infinite continuum of truth, from absolutely false to absolutely true.
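
As a sketch of what “degrees of truth” can look like formally (my illustration, not an example drawn from the original post), compare classical two-valued logic with a simple fuzzy logic that uses Zadeh’s min/max operations; with a continuum of truth values, the Law of the Excluded Middle no longer holds automatically:

```latex
% Classical two-valued logic versus a simple fuzzy logic (Zadeh's operations).
% With a continuum of truth values, the Law of the Excluded Middle can fail.
\begin{gather*}
\text{Classical: } v(P) \in \{0,1\},\qquad v(\lnot P) = 1 - v(P),\qquad
    v(P \lor \lnot P) = 1 \ \text{always}.\\
\text{Fuzzy: } v(P) \in [0,1],\qquad v(\lnot P) = 1 - v(P),\qquad
    v(P \lor Q) = \max\bigl(v(P), v(Q)\bigr).\\
\text{If } v(P) = 0.4:\qquad v(P \lor \lnot P) = \max(0.4,\ 0.6) = 0.6 \neq 1.
\end{gather*}
```

In such a system, a vague statement can be assigned a truth value of 0.4 or 0.6 rather than being forced into “true” or “false,” which is one way logicians have formalized the idea of a continuum of truth.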

The notion that we need multiple symbolic and conceptual models to understand reality remains controversial to many. It smacks of relativism, they argue, in which every person’s opinion is as valid as another person’s. But historically, the use of multiple perspectives hasn’t resulted in the abandonment of intellectual standards among mathematicians and scientists. One still needs many years of education and an advanced degree to obtain a job as a mathematician or scientist, and there is a clear hierarchy among practitioners, with the very best mathematicians and scientists working at the most prestigious universities and winning the highest awards. That is because there are still standards for what is good mathematics and science, and scholars are rewarded for solving problems and advancing knowledge. The fact that no one has agreed on what is the One True system of mathematics or logic isn’t relevant. In fact, physicist Stephen Hawking has argued:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and to the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient (The Grand Design, p. 7).

Among the most controversial and mind-bending claims Pirsig makes is that the very laws of nature themselves exist only in the human mind. “Laws of nature are human inventions, like ghosts,” he writes. Pirsig even remarks that it makes no sense to think of the law of gravity existing before the universe, that it only came into existence when Isaac Newton thought of it. It’s an outrageous claim, but if one looks closely at what the laws of nature actually are, it’s not so crazy an argument as it first appears.

For all of the advances that science has made over the centuries, there remains a sharp division of views among philosophers and scientists on one very important issue: are the laws of nature actual causal powers responsible for the origins and continuance of the universe or are the laws of nature summary descriptions of causal patterns in nature? The distinction is an important one. In the former view, the laws of physics are pre-existing or eternal and possess god-like powers to create and shape the universe; in the latter view, the laws have no independent existence – we are simply finding causal patterns and regularities in nature that allow us to make predictions, and we call these patterns “laws.”

One powerful argument in favor of the latter view is that most of the so-called “laws of nature,” contrary to the popular view, actually have exceptions – and sometimes the exceptions are large. That is because the laws are simplified models of real phenomena. The laws were cobbled together by scientists in order to strike a careful balance between the values of scope, predictive accuracy, and simplicity. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has noted that as a result of this balance of values, physical laws are actually approximations that apply only within a certain range. This point has also been made more recently by Ronald Giere, a professor of philosophy at the University of Minnesota, in Science Without Laws and Nancy Cartwright of the University of California at San Diego in How the Laws of Physics Lie.

Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Newton’s laws of motion also have exceptions, depending on the force, distance, and speed. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of one planet. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe that Mendel’s laws should not even be considered laws.
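
To see what “approximation” means in practice, here is a small illustrative sketch, mine rather than anything from Pirsig or the authors cited, comparing the ideal gas law with the van der Waals equation for one mole of carbon dioxide (the van der Waals constants are approximate textbook values). The two agree when the gas is dilute and diverge badly as the gas is compressed:

```python
# Illustrative sketch: the ideal gas law (PV = nRT) versus the van der Waals
# equation for one mole of CO2. The constants a and b are approximate
# textbook values; the point is only to show how the "law" drifts from a
# better approximation as pressure rises.

R = 0.08206          # gas constant, L*atm/(mol*K)
a, b = 3.59, 0.0427  # van der Waals constants for CO2 (L^2*atm/mol^2, L/mol)

def ideal_pressure(T, V):
    """Pressure (atm) of 1 mol of gas according to the ideal gas law."""
    return R * T / V

def van_der_waals_pressure(T, V):
    """Pressure (atm) of 1 mol of gas according to the van der Waals equation."""
    return R * T / (V - b) - a / V**2

T = 300.0  # kelvin
for V in (10.0, 1.0, 0.2):  # molar volume in liters: dilute -> strongly compressed
    print(f"V = {V:4.1f} L/mol:  ideal = {ideal_pressure(T, V):6.1f} atm,  "
          f"van der Waals = {van_der_waals_pressure(T, V):6.1f} atm")

# Approximate output: the two nearly agree at 10 L/mol (about 2.5 atm), differ
# by roughly 10% at 1 L/mol, and differ by nearly a factor of two at 0.2 L/mol,
# where CO2 at 300 K is approaching its critical point.
```

Neither equation is the “real” law; the van der Waals equation is itself an approximation that trades a little simplicity for a wider range of accuracy, which is exactly the balancing of scope, accuracy, and simplicity described above.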

So if we think of laws of nature as being pre-existing, eternal commandments, with god-like powers to shape the universe, how do we account for these exceptions to the laws? The standard response by scientists is that their laws are simplified depictions of the real laws. But if that is the case, why not state the “real” laws? Because by the time we wrote down the real laws, accounting for every possible exception, we would have an extremely lengthy and detailed description of causation that would not recognizably be a law. The whole point of the laws of nature was to develop tools by which one could predict a large number of phenomena (scope), maintain a good-enough correspondence to reality (accuracy), and make it possible to calculate predictions without spending an inordinate amount of time and effort (simplicity). That is why although Einstein’s conception of gravity and his “field equations” have supplanted Newton’s law of gravitation, physicists still use Newton’s “law” in most cases because it is simpler and easier to use; they only resort to Einstein’s complex equations when they have to! The laws of nature are human tools for understanding, not mathematical gods that shape the universe. The actual practice of science confirms Pirsig’s point that the symbolic and conceptual models that we create to understand reality have to be judged by how good they are – simple correspondence to reality is insufficient and in many cases is not even possible anyway.
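
A rough back-of-the-envelope estimate (mine, not from the original essay) shows why Newton’s approximation is usually good enough: the size of the general-relativistic correction to Newtonian gravity is governed by the dimensionless ratio GM/(rc²), which is tiny in ordinary circumstances.

```latex
% Rough size of the relativistic correction to Newtonian gravity,
% governed by the dimensionless ratio GM/(r c^2).
\begin{align*}
\frac{GM_{\oplus}}{R_{\oplus}\,c^{2}}
  &\approx \frac{(6.67\times 10^{-11})(5.97\times 10^{24})}
                {(6.37\times 10^{6})(3.0\times 10^{8})^{2}}
   \approx 7\times 10^{-10}
   && \text{(Earth's gravity at its surface)}\\[4pt]
\frac{GM_{\odot}}{r\,c^{2}}
  &\approx 2.5\times 10^{-8}
   && \text{(the Sun's gravity at Mercury's orbit)}
\end{align*}
```

Corrections of a part in a billion are invisible in almost any practical calculation, which is why Newton’s “law” remains the everyday tool; only where such tiny effects accumulate (the slow precession of Mercury’s orbit) or where gravity is strong (near neutron stars and black holes) do Einstein’s field equations become necessary.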

_____________

 

Ultimately, Pirsig concluded, the scientific enterprise is not that different from the pursuit of other forms of knowledge – it is based on a search for the Good. Occasionally, you see this acknowledged explicitly, when mathematicians discuss the beauty of certain mathematical proofs or results, as defined by their originality, simplicity, ability to solve many problems at once, or their surprising nature. Scientists also sometimes write about the importance of elegance in their theories, defined as the ability to explain as much as possible, as clearly as possible, and as simply as possible. Depending on the field of study, the standards of judgment, the tools, and the scope of inquiry may differ. But all forms of human knowledge — art, rhetoric, science, reason, and religion — originate in, and are dependent upon, a response to the Good or Quality. The difference between science and religion is that scientific models are more narrowly restricted to understanding how to predict and manipulate natural phenomena, whereas religious models address larger questions of meaning and value.

Pirsig did not ignore or suppress the failures of religious knowledge with regard to factual claims about nature and history. The traditional myths of creation and the stories of various prophets were contrary to what we know now about physics, biology, paleontology, and history. In addition, Pirsig was by no means a conventional theist — he apparently did not believe that God was a personal being who possessed the attributes of omniscience and omnipotence, controlling or potentially controlling everything in the universe.

However, Pirsig did believe that God was synonymous with the Good, or “Quality,” and was the source of all things.  In fact, Pirsig wrote that his concept of Quality was similar to the “Tao” (the “Way” or the “Path”) in the Chinese religion of Taoism. As such, Quality was the source of being and the center of existence. It was also an active, dynamic power, capable of bringing about higher and higher levels of being. The evolution of the universe, from simple physical forms, to complex chemical compounds, to biological organisms, to societies was Dynamic Quality in action. The most recent stage of evolution – Intellectual Quality – refers to the symbolic models that human beings create to understand the universe. They exist in the mind, but are a part of reality all the same – they represent a continuation of the growth of Quality.

What many religions were missing, in Pirsig’s view, was not objectivity, but dynamism: an ability to correct old errors and achieve new insights. The advantage of science was its willingness and ability to change. According to Pirsig,

If scientists had simply said Copernicus was right and Ptolemy was wrong without any willingness to further investigate the subject, then science would have simply become another minor religious creed. But scientific truth has always contained an overwhelming difference from theological truth: it is provisional. Science always contains an eraser, a mechanism whereby new Dynamic insight could wipe out old static patterns without destroying science itself. Thus science, unlike orthodox theology, has been capable of continuous, evolutionary growth. (Lila, p. 222)

The notion that religion and orthodoxy go together is widespread among believers and secularists. But there is no necessary connection between the two. All religions originate in social processes of story-telling, dialogue, and selective borrowing from other cultures. In fact, many religions begin as dangerous heresies before they become firmly established — orthodoxies come later. The problem with most contemporary understandings of religion is that one’s adherence to religion is often measured by one’s commitment to orthodoxy and membership in religious institutions rather than an honest quest for what is really good.  A person who insists on the literal truth of the Bible and goes to church more than once a week is perceived as being highly religious, whereas a person not connected with a church but who nevertheless seeks religious knowledge wherever he or she can find it is considered less committed or even secular.  This prejudice has led many young people to identify as “spiritual, not religious,” but religious knowledge is not inherently about unwavering loyalty to an institution or a text. Pirsig believed that mysticism was a necessary component of religious knowledge and a means of disrupting orthodoxies and recovering the dynamic aspect of religious insight.

There is no denying that the most prominent disputes between science and religion in the last several centuries regarding the physical workings of the universe have resulted in a clear triumph for scientific knowledge over religious knowledge.  But the solution to false religious beliefs is not to discard religious knowledge — religious knowledge still offers profound insights beyond the scope of science. That is why it is necessary to recover the dynamic nature of religious knowledge through mysticism, correction of old beliefs, and reform. As Pirsig argued, “Good is a noun.” Not because Good is a thing or an object, but because Good  is the center and foundation of all reality and all forms of knowledge, whether we are consciously aware of it or not.

A Defense of the Ancient Greek Pagan Religion

In a previous post on the topic of mythos and logos, I discussed the evolution of ancient Greek thought from its origins in imaginative legends about gods to the development of reason, philosophy, and logic. Today, every educated human being knows about the contributions of Socrates, Plato, Euclid, and Pythagoras. But the ancient Greek religion appears to us as an embarrassment, something to be passed over in silence or laughed at. Indeed, it is difficult to read about the vast array of Greek gods and goddesses and the ludicrous stories about their various activities without wondering how Greek civilization ever managed to achieve the great things it did while so mired in superstition.

I am not going to defend ancient Greek superstition. But I will say this: Greek religion was much more than mere superstition — it was about devotion to a greater good. According to the German scholar Werner Jaeger, “Areté was the central ideal of all Greek culture.” (Paideia: The Ideal of Greek Culture, Vol. I, p. 15). The word areté means “excellence,” and although in early Greek history it referred primarily to the virtues of the warrior-hero, by the time of Homer areté referred more broadly to all types of excellence. Areté was rooted in the mythos of ancient Greece, in the epic poetry of Hesiod and Homer, with the more philosophical logos emerging later.

This devotion of the Greeks to a greater good was powerful, even fanatical. Religion was so absolutely central to Greek life that this ancient pre-industrial civilization spent enormous sums of money on temples, statues, and religious festivals, at a time when long hours of hard physical labor were necessary simply to keep from starving. However, at the same time, Greek religion was remarkably loose and liberal in its set of beliefs — there was not a single accepted doctrine, a written set of rules, or even a single sacred text comparable to the Torah, Bible, or Quran. The Greeks freely created a plethora of gods and stories about the gods and revised the stories as they wished. But the Greeks did insist upon the fundamental reality of a greater good and complete devotion to it. I will argue that this devotion was responsible for the enormous contributions of ancient Greece, and that a completely secular, rational Greece would not have accomplished nearly as much.

In order to understand my defense of ancient Greek religion, I think it is important to recognize that there are different types of knowledge. There is knowledge of natural causation and knowledge of history; but there is also esthetic knowledge (knowledge of the beautiful); moral knowledge; and knowledge of the proper goals and ends of human life. Greek religion failed in understanding natural causation and history, but often succeeded in these latter forms of knowledge. Greek religion was never merely a set of statements about the origins and history of the universe and the operations of nature. Rather, Greek religion was characterized by a number of other qualities. Greek religion was experiential, symbolic, celebratory, practical, and teleological. Let’s look at each of these features more closely.

Experiential. In order to understand Greek religion — or any religion, actually — one has to do more than simply absorb a set of statements of belief. One has to experience the presence of a greater good.

[Image: replica statue of Athena in the Nashville Parthenon]

[Image: reconstruction of the statue of Zeus at Olympia]

The first picture above is of a 40-foot-tall statue of the Greek goddess Athena in a life-size recreation of the ancient Greek Parthenon in Nashville, Tennessee. The second picture is a depiction of the probable appearance of the statue of Zeus at the Temple of Zeus in the sanctuary of Olympia, Greece, the site of the Olympic games.

Contrary to popular belief, Greek statues were not all white, but often painted in vivid colors, and sometimes adorned with gold, ivory, and precious stones. The size and beauty of the temple statues were meant to convey grandeur, and that is precisely the effect that they had. The statue of Zeus at Olympia has been listed among the Seven Wonders of the Ancient World. A Roman general who once saw the statue of Zeus declared that he “was moved to his soul, as if he had seen the god in person.” The Greek orator and philosopher Dio Chrysostom declared that a single glimpse of the statue of Zeus would make a man forget all his earthly troubles.

Symbolic. When the Greeks created sculptures of their gods, they were not really aiming for an accurate depiction of what their gods “really” looked like. The gods were spirits or powers; the gods were responsible for creating forms, and could appear in any form they wished, but in themselves gods had no human form. Indeed, in one myth, Zeus was asked by a mortal to reveal his true form; but Zeus’s true form was a thunderbolt, so when Zeus appeared as a thunderbolt, he incinerated the unfortunate person. Rather than depict the gods “realistically,” Greek sculptors sought to depict the gods symbolically, as the most beautiful human forms imaginable, male or female. These are metaphorical or analogical depictions, using personification to represent the gods.

I am not going to argue that all Greek religion was metaphorical — clearly, most Greeks believed in the gods as real, actual personalities. But there was a strong metaphorical aspect to Greek religious thought, and it is often difficult even for scholars to tell what parts of Greek religion were metaphorical and what parts were literal. For example, we know that the Greeks actually worshiped certain virtues and desired goods, such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks used personal forms to represent these virtues, and created statues, temples, and altars dedicated to them, but they did not see the virtues as literal personalities. Some of this symbolic representation of virtues survives to this day: the blindfolded Lady Justice, the statue of Freedom on the top of the U.S. Capitol building, and the Statue of Liberty are several personifications widely recognized in modern America. Some scholars have suggested that the main Greek gods began as personifications (i.e., “Zeus” was the personification of the sky) but that over time the gods came to be seen as full-fledged personalities. However, the lack of written records from the early periods in Greek history makes it impossible to confirm or refute this claim.

Celebratory. Religion is often seen as a strict and solemn affair, and although Greek religion had some of these aspects, there was a strong celebratory aspect to Greek religion. The Greeks not only wanted to thank the gods for life and food and drink and love, they wanted to demonstrate their thanks and celebrate through feasts, festivals, and holidays. Indeed, it is probably the case that the only time most Greeks ate meat was after a ritual sacrifice of cattle or other livestock at the altar of a god. (Greek Religion, ed. Daniel Ogden, p. 402) In ancient Athens, about half of the days in the calendar were devoted to religious festivals and each god or goddess often had more than one festival.  The most famous religious festival was the festival devoted to Zeus, held every four years at the sanctuary of Olympia. The Greeks visited the temple of Zeus and prayed to their god — but also held games, celebrated the victors, and enjoyed feasts. The Greeks also held festivals devoted to the god Dionysus, the god of wine and ecstasy. Drink, music, theater, and dancing played a central role in Dionysian festivals.

Practical. When I was doing research on Greek religion, I came across a fascinating discussion on how the Greeks performed animal sacrifice. Allegedly, when the animals were slaughtered, the Greeks were obligated to share a portion of the animal with the gods by burning it on the altar. However, when the Greeks butchered the animal, they reserved all the meat for themselves and sacrificed only the bones, covered with a deceptive layer of fat, for the gods. It’s hard not to be somewhat amused by this. Why would the powerful, all-knowing gods be satisfied with the useless, inedible portions of an animal, while the Greeks kept the best parts for themselves? The Greeks even had a myth to justify this practice: allegedly Prometheus fooled Zeus into accepting the bones and fat, and from that original act, all future sacrifices were similarly justified. As devoted to the gods as the Greeks were, they were also practical; in a primitive society, meat was a rare and expensive commodity for most. Sacrifice was a symbolic act of devotion to the gods, but the Greeks were not prepared to go hungry by sacrificing half of their precious meat.

And what of prayer to the gods? Clearly, the Greeks prayed to the gods and asked favors of them. But prayer never stopped or even slowed Greek achievements in art, architecture, athletics, philosophy, and mathematics. No Greek ever entered the Olympic games fat and out-of-shape, hoping that copious prayers and sacrifices to Zeus would help him win the games. No Greek ever believed that one did not have to train hard for war, that prayers to their deity would suffice to save their city from destruction at the hands of an enemy. Nor did the Greeks expect incompetent agriculture or engineering would be saved by prayer. The Greeks sought inspiration, strength, and assistance from the gods, but they did not believe that prayer would substitute for their personal shortcomings and neglect.

Teleological (goal-oriented). In a previous essay, I discussed the role of teleology — explanation in terms of goals or purpose — in accounting for causation. Although modern science has largely dismissed teleological causation in favor of efficient causation, I argued that teleological, or goal-oriented, causation could have a significant role in understanding (1) the long-term development of the universe and (2) the behavior of life forms. In a teleological perspective, human beings are not merely the end result of chemical or atomic mechanisms — humans are able to partially transcend the parts they are made of and work toward certain goals or ends that they choose.

We misunderstand Greek religion when we think of it as being merely a collection of primitive beliefs about natural causation that has been superseded by science. The gods were not merely causal agents of thunderstorms, earthquakes, and plagues. They were representations of areté, idealized forms of human perfection that inspired and guided the Greeks. In the pantheon of major Greek gods, only one (Poseidon) is associated solely with natural causation, being responsible for the seas and for earthquakes. Eight of the gods were associated primarily with human qualities, activities, and institutions — love, beauty, music, healing, war, hunting, wisdom, marriage, childbirth, travel, language, and the home. Three gods were associated with both natural causation and human qualities, Zeus being responsible for thunder and lightning, as well as law and justice. The Greeks also honored and worshiped mortal heroes, extraordinary persons who founded a city, overthrew a tyrant, or won a war. Inventors, poets, and athletes were worshiped as well, not because they had the powers of the gods, but because they were worthy of emulation and were sources of inspiration. (“Heroes and Hero Cults,” Greek Religion, ed. Daniel Ogden, pp. 100-14)

At this point, you may well ask, can’t we devote ourselves to the goal of excellence by using reason? There is no need to read about myths and appeal to invisible superbeings that do not exist in order to pursue excellence. This argument is partly true, but it must be pointed out that reason in itself is an insufficient guide to what goods we should be devoted to. Esthetics, imagination, and faith provide us with goals that reason by itself can’t provide. Reason is a superb tool for thinking, but it is not an all-purpose tool.

You can see the limitations of pure reason in modern, secular societies. People don’t really spend much time thinking about the greater goods they should pursue, so they fall into the trap of materialism. Religion is considered a private affair, so it is not taught in public schools, and philosophy is considered a waste of time. So people tend to borrow their life goals from their surrounding culture and peer groups; from advertisers on television and the Internet; and from movie stars and famous musicians. People end up worshiping money, technology, and celebrities; they know those things are “real” because they are material, tangible, and because their culture tells them these things are important. But this worship without religion is only a different form of irrationality and superstition. As “real” as material goods are, they provide only temporary satisfaction; no amount of money, no house big enough, no car fancy enough, and no celebrity admirable enough can bring us lasting happiness.

What the early Greeks understood is that reason in itself is a weak tool for directing the passions — only passions, rightly-ordered, can rule other passions. The Greeks also knew that excellence and beauty were real, even if the symbolic forms used to represent these realities were imprecise and imperfect. Finally, the Greeks understood that faith had causal potency — not in the sense that prayers could prevent an earthquake or a plague, but in the sense that attaining the heights of human achievement was possible only by total and unwavering commitment to a greater good, reinforced by ritual and habit. For the Greeks, reality was a work-in-progress: it didn’t consist merely of static “things” but of human possibilities and potential, the ability to be more than ourselves, to be greater than ourselves. However we want to symbolize it, devotion to a greater good is the first step to realizing that good. When we skip the first step, devotion, we shouldn’t be surprised when we fail to attain it.

What Does Science Explain? Part 5 – The Ghostly Forms of Physics

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria — that is, in relation to how much it describes, it must be rather simple. — John von Neumann (“Method in the Physical Sciences,” in The Unity of Knowledge, 1955)

Now we come to the final part of our series of posts, “What Does Science Explain?” (If you have not already, you can peruse parts 1, 2, 3, and 4 here). As I mentioned in my previous posts, the rise of modern science was accompanied by a change in humanity’s view of metaphysics, that is, our theory of existence. Medieval metaphysics, largely influenced by ancient philosophers, saw human beings as the center or summit of creation; furthermore, medieval metaphysics proposed a sophisticated, multifaceted view of causation. Modern scientists, however, rejected much of medieval metaphysics as subjective and saw reality as consisting mainly of objects impacting or influencing each other in mathematical patterns.  (See The Metaphysical Foundations of Modern Science by E.A. Burtt.)

I have already critically examined certain aspects of the metaphysics of modern science in parts 3 and 4. For part 5, I wish to look more closely at the role of Forms in causation — what Aristotle called “formal causation.” This theory of causation was strongly influenced by Aristotle’s predecessor Plato and his Theory of Forms. What is Plato’s “Theory of Forms”? In brief, Plato argued that the world we see around us — including all people, trees, and animals, stars, planets and other objects — is not the true reality. The world and the things in it are imperfect and perishable realizations of perfect forms that are eternal, and that continually give birth to the things we see. That is, forms are the eternal blueprints of perfection which the material world imperfectly represents. True philosophers do not focus on the material world as it is, but on the forms that material things imperfectly reflect. In order to judge a sculpture, painting, or natural setting, a person must have an inner sense of beauty. In order to evaluate the health of a particular human body, a doctor must have an idea of what a perfectly healthy human form is. In order to evaluate a government’s system of justice, a citizen must have an idea about what perfect justice would look like. In order to critically judge leaders, citizens must have a notion of the virtues that such a leader should have, such as wisdom, honesty, and courage.  Ultimately, according to Plato, a wise human being must learn and know the perfect forms behind the imperfect things we see: we must know the Form of Beauty, the Form of Justice, the Form of Wisdom, and the ultimate form, the Form of Goodness, from which all other forms flow.

Unsurprisingly, many intelligent people in the modern world regard Plato’s Theory of Forms as dubious or even outrageous. Modern science teaches us that sure knowledge can only be obtained by observation and testing of real things, but Plato tells us that our senses are deceptive, that the true reality is hidden behind what we sense. How can we possibly confirm that the forms are real? Even Plato’s student Aristotle had problems with the Theory of Forms and argued that while the forms were real, they did not really exist until they were manifested in material things.

However, there is one important sense in which modern science retained the notion of formal causation, and that is in mathematics. In other words, most scientists have rejected Plato’s Theory of Forms in all aspects except for Plato’s view of mathematics. “Mathematical Platonism,” as it is called, is the idea that mathematical forms are objectively real and are part of the intrinsic order of the universe. However, there are also sharp disagreements on this subject, with some mathematicians and scientists arguing that mathematical forms are actually creations of the human imagination.

The chief difference between Plato and modern scientists on the study of mathematics is this: According to Plato, the objects of geometry — perfect squares, perfect circles, perfect planes — existed nowhere in the material world; we only see imperfect realizations. But the truly wise studied the perfect, eternal forms of geometry rather than their imperfect realizations. Therefore, while astronomical observations indicated that planetary bodies orbited in imperfect circles, with some irregularities and errors, Plato argued that philosophers must study the perfect forms instead of the actual orbits! (The Republic, XXVI, 524D-530C) Modern science, on the other hand, is committed to observation and study of real orbits as well as the study of perfect mathematical forms.

Is it tenable to hold the belief that Plato and Aristotle’s view of eternal forms is mostly subjective nonsense, but they were absolutely right about mathematical forms being real? I argue that this selective borrowing of the ancient Greeks doesn’t quite work, that some of the questions and difficulties with proving the reality of Platonic forms also afflict mathematical forms.

The main argument for mathematical Platonism is that mathematics is absolutely necessary for science: mathematics is the basis for the most important and valuable physical laws (which are usually in the form of equations), and everyone who accepts science must agree that the laws of nature or the laws of physics exist. However, the counterargument to this claim is that while mathematics is necessary for human beings to conduct science and understand reality, that does not mean that mathematical objects or even the laws of nature exist objectively, that is, outside of human minds.

I have discussed some of the mysterious qualities of the “laws of nature” in previous posts (here and here). It is worth pointing out that there remains a serious debate among philosophers as to whether the laws of nature are (a) descriptions of causal regularities which help us to predict or (b) causal forces in themselves. This is an important distinction that most people, including scientists, don’t notice, although the theoretical consequences are enormous. Physicist Kip Thorne writes that laws “force the Universe to behave the way it does.” But if laws have that kind of power, they must be ubiquitous (exist everywhere), eternal (exist prior to the universe), and have enormous powers although they have no detectable energy or mass — in other words, the laws of nature constitute some kind of supernatural spirit. On the other hand, if laws are summary descriptions of causation, these difficulties can be avoided — but then the issue arises: do the laws of nature or of physics really exist objectively, outside of human minds, or are they simply human-constructed statements about patterns of causation? There are good reasons to believe the latter is true.

The first thing that needs to be said is that nearly all of these so-called laws of nature are actually approximations of what really happens in nature, and they hold only under certain restrictive conditions; outside those conditions, even the approximations fall apart. Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of one planet. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe that Mendel’s laws should not even be considered laws.

The fact of the matter is that even with the best laws that science has come up with, we still cannot solve exactly for the motions of more than two mutually interacting astronomical bodies; we have to fall back on approximations and simplifying assumptions. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has concluded that the laws of nature or physics are actually cobbled together by scientists based on multiple criteria:

Briefly we may say that typical physical laws express a relationship between quantities or a property of systems which is the simplest useful approximation to the true physical behavior and which appears to be theoretically tractable. “Simplest” is vague in many cases, but clear for the extreme cases which provide its only use. “Useful” is a function of accuracy and range and purpose. (Michael Scriven, “The Key Property of Physical Laws — Inaccuracy,” in Current Issues in the Philosophy of Science, ed. Herbert Feigl)

The response to this argument is that it doesn’t disprove the objective existence of physical laws — it simply means that the laws that scientists come up with are approximations to real, objectively existing underlying laws. But if that is the case, why don’t scientists simply state what the true laws are? Because the “laws” would actually end up being extremely long and complex statements of causation, with so many conditions and exceptions that they would not really be considered laws.

An additional counterargument to mathematical Platonism is that while mathematics is necessary for science, it is not necessary for the universe. This is another important distinction that many people overlook. Understanding how things work often requires mathematics, but that doesn’t mean the things in themselves require mathematics. The study of geometry has given us pi and the Pythagorean theorem, but a child does not need to know these things in order to draw a circle or a right triangle. Circles and right triangles can exist without anyone, including the universe, knowing the value of pi or the Pythagorean theorem. Calculus was invented in order to understand change and acceleration; but an asteroid, a bird, or a cheetah is perfectly capable of changing direction or accelerating without needing to know calculus.

Even among mathematicians and scientists, there is a significant minority who have argued that mathematical objects are actually creations of the human imagination, that math may be used to model aspects of reality, but it does not necessarily do so. Mathematicians Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.” Derek Abbott, a professor of engineering, writes that engineers tend to reject mathematical Platonism: “the engineer is well acquainted with the art of approximation. An engineer is trained to be aware of the frailty of each model and its limits when it breaks down. . . . An engineer . . . has no difficulty in seeing that there is no such a thing as a perfect circle anywhere in the physical universe, and thus pi is merely a useful mental construct.” (“The Reasonable Ineffectiveness of Mathematics“) Einstein himself, making a distinction between mathematical objects used as models and pure mathematics, wrote that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” Hartry Field, a philosopher at New York University, has argued that mathematics is a useful fiction that may not even be necessary for science. Field goes on to show that it is possible to reconstruct Newton’s theory of gravity without using mathematics. (There is more discussion on this subject here and here.)

So what can we conclude about the existence of forms? I have to admit that although I’m skeptical, I have no sure conclusions. It seems unlikely that forms exist outside the mind . . . but I can’t prove they don’t exist either. Forms do seem to be necessary for human reasoning — no thinking human can do without them. And forms seem to be rooted in reality: perfect circles, perfect squares, and perfect human forms can be thought of as imaginative projections of things we see, unlike Sherlock Holmes or fire-breathing dragons or flying spaghetti monsters, which are more creatively fictitious. Perhaps one could reconcile these opposing views on forms by positing that the human mind and imagination are part of the universe itself, and that the universe is thereby becoming increasingly self-aware.

Another way to think about this issue was offered by Robert Pirsig in Zen and the Art of Motorcycle Maintenance. According to Pirsig, Plato made a mistake by positing Goodness as a form. Even considered as the highest form, Goodness (or “Quality,” in Pirsig’s terminology) can’t really be thought of as a static thing floating around in space or some otherworldly realm. Forms are conceptual creations of humans who are responding to Goodness (Quality). Goodness itself is not a form, because it is not an unchanging thing — it is not static or even definable. It is “reality itself, ever changing, ultimately unknowable in any kind of fixed, rigid way.” (p. 342) Once we let go of the idea that Goodness or Quality is a form, we can realize that not only is Goodness part of reality, it is reality.

As conceptual creations, ideal forms are found in both science and religion. So why, then, does there seem to be such a sharp split between science and religion as modes of knowledge? I think it comes down to this: science creates ideal forms in order to model and predict physical phenomena, while religion creates ideal forms in order to provide guidance on how we should live.

Scientists like to see how things work — they study the parts in order to understand how the wholes work. To increase their understanding, scientists may break down certain parts into smaller parts, and those parts into even smaller parts, until they come to the most fundamental, indivisible parts. Mathematics has been extremely useful in modeling and understanding these parts of nature, so scientists create and appreciate mathematical forms.

Religion, on the other hand, tends to focus on larger wholes. The imaginative element of religion envisions perfect states of being, whether it be the Garden of Eden or the Kingdom of Heaven, as well as perfect (or near perfect) humans who serve as prophets or guides to a better life. Religion is less concerned with how things work than with how things ought to work, how things ought to be. So religion will tend to focus on subjects not covered by science, including the nature and meaning of beauty, love, and justice. There will always be debates about the appropriateness of particular forms in particular circumstances, but the use of forms in both science and religion is essential to understanding the universe and our place in it.

What Does Science Explain? Part 4 – The Ends of the Universe

Continuing my series of posts on “What Does Science Explain?” (parts 1, 2, and 3 here), I wish today to discuss the role of teleological causation. Aristotle referred to teleology in his discussion of four causes as “final causation,” because it referred to the goals or ends of all things (the Greek word “telos” meaning “goal,” “purpose,” or “end”). From a teleological viewpoint, an acorn grows into an oak tree, a bird takes flight, and a sculptor creates statues because these are the inherent and intended ends of the acorn, bird, and sculptor. Medieval metaphysics granted a large role for teleological causation in its view of the universe.

According to E.A. Burtt in The Metaphysical Foundations of Modern Science, the growth of modern science changed the idea of causation, focusing almost exclusively on efficient causation (objects impacting or affecting other objects). The idea of final (goal-oriented) causation was dismissed. And even though early modern scientists such as Galileo and Newton believed in God, their notion of God was significantly different from the traditional medieval conception of God. Rather than seeing God as the Supreme Good, which continually draws all things to higher levels of being, early modern scientists reduced God to the First Efficient Cause, who merely started the mechanism of the universe and then let it run.

It was not unreasonable for early scientists to focus on efficient causation rather than final causation. It was often difficult to come up with testable hypotheses and workable predictive models by assuming long-term goals in nature. There was always a strong element of mystery about what the true ends of nature were, and it was very difficult to pin down these alleged goals. Descartes believed in God, but also wrote that it was impossible to know what God’s goals were. For that reason, it is quite likely that science in its early stages needed to overcome medieval metaphysics in order to make its first great discoveries about nature. Focusing on efficient causation was simpler and apt to bring quicker results.

However, now that science has advanced over the centuries, it is worth revisiting the notion of teleological causation as a means of filling in gaps in our current understanding of nature. It is true that the concept of long-term goals for physical objects and forces often does not help very much in terms of developing useful, short-term predictive models. But final causation can help make sense of long-term patterns which may not be apparent when making observations over short periods of time. Processes that look purposeless and random in the short-term may actually be purposive in the long-term. We know that an acorn under the right conditions will eventually become an oak tree, because the process and the outcome of development can be observed within a reasonable period of time and that knowledge has been passed on to us. If our knowledge base began at zero and we came across an acorn for the first time, we would find it extremely difficult to predict the long-term future of that acorn merely by cutting it up and examining it under a microscope.

So, does the universe have long-term, goal-oriented patterns that may be hidden among the short-term realities of contingency and randomness? A number of physicists began to speculate that this was the case in the late twentieth century, when their research indicated that the physical forces and constants of the universe must fall within a very narrow range of values for life to be possible, or even for the universe to persist. A change in even one of the forces or constants could make life impossible or cause the universe to self-destruct in a short period of time. In this view, the evolution of the universe and of life on earth has been subject to a great deal of randomness, but the cosmic structure and conditions that made evolution possible are not at all random. As the physicist Freeman Dyson has noted:

It is true that we emerged in the universe by chance, but the idea of chance is itself only a cover for our ignorance. . . . The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. (Disturbing the Universe, p. 250)

In what way did the universe “know we were coming”? Consider the fact that in the early universe after the Big Bang, the only elements that existed were the “light” elements hydrogen and helium, along with trace amounts of lithium and beryllium. A universe with only four elements would certainly be simple, but there would not be much to build upon. Life, at least as we know it, requires not just hydrogen but at a minimum carbon, oxygen, nitrogen, phosphorus, and sulfur. How did these and other heavier elements come into being? Stars produced them, through the process of fusion. In fact, stars have been referred to as the “factories” of heavy elements. Human beings today consist primarily of oxygen, followed by carbon, hydrogen, nitrogen, calcium, and phosphorus. Additional elements compose less than one percent of the human body, but even most of these elements are essential to human life. Without the elements produced earlier by stars we would not be here. It has been aptly said that human beings are made of “stardust.”

So why did stars create the heavier elements? After all, the universe could have gotten along quite well without additional elements. Was it random chance that created the heavy elements? Not really. Random chance plays a role in many natural events, but the creation of heavy elements in stars requires some precise conditions — it is not just a churning jumble of subatomic particles. The astronomer Fred Hoyle was the first scientist to study how stars made heavy elements, and he noted that the process required very specific physical values (most famously, a finely placed energy level in the carbon-12 nucleus) in order to work. When he concluded his research, Hoyle remarked, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”

The creation of heavier elements by the stars does not necessarily mean that the universe intended specifically to create human beings, but it does seem to indicate that the universe somehow “knew” that heavy elements would be required to create higher forms of being, above and beyond the simple and primitive elements created by the Big Bang. In that sense, creating life is plausibly a long-term goal of the universe.

And what about life itself? Does it make sense to use teleology to study the behavior of life forms? Biologist Peter Corning has argued that while science has long pursued reductionist explanations of phenomena, it is impossible to really know biological systems without pursuing holistic explanations centered on the purposive behavior of organisms.

According to reductionism, all things can be explained by the parts that they are made of — human beings are made of tissues and organs, which are made of cells, which are made of chemical compounds, which are made of atoms, which are made of subatomic particles. In the view of many scientists, everything about human beings can in principle be explained by actions at the subatomic level. Peter Corning, however, argues that this conception is mistaken. Reductionism is necessary for partially explaining biological systems, but it is not sufficient. The reason for this is that the wholes are greater than the parts, and the behavior of wholes often has characteristics that are radically different from the parts that they are made of. For example, it would be dangerous to add pure hydrogen or oxygen to a fire, but when hydrogen atoms and oxygen atoms are combined in the right way — as H2O — one obtains a chemical compound that is quite useful for extinguishing fires. The characteristics of the molecule are different from the characteristics of the atoms in it. Likewise, at the subatomic level, particles may have no definite position in space and can even be said to exist in multiple places at once; but human beings only exist in one place at a time, despite the fact that human beings are made of subatomic particles. The behavior of the whole is different from the behavior of the parts. The transformation of properties that occurs when parts form new wholes is known as “emergence.”

Corning notes that when one incorporates analysis of wholes into theoretical explanation, there is goal-oriented “downward causation” as well as “upward causation.” For example, a bird seeks the goal of food and a favorable environment, so when it begins to get cold, that bird flies thousands of miles to a warmer location for the winter. The atoms that make up that bird obviously go along for the ride, but a scientist can’t use the properties of the atoms to predict the flight of these atoms; only by looking at the properties of the bird as a whole can a scientist predict what the atoms making up the bird are going to do. The bird as a whole doesn’t have complete control over the atoms composing its body, but it clearly has some control. Causation goes down as well as up. Likewise, neuropsychologist Roger Sperry has argued that human consciousness is a whole that influences the parts of the brain and body just as the parts of the brain and body influence the consciousness: “[W]e contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action . . . these emergent pattern properties in the brain have causal control potency. . . ” (“Mind, Brain, and Humanist Values,” Bulletin of the Atomic Scientists, Sept 1966) In Sperry’s view, the values created by the human mind influence human behavior as much as the atoms and chemicals in the human body and brain.

Science has traditionally viewed the evolution of the universe as upward causation only, with smaller parts joining into larger wholes as a result of the laws of nature and random chance. This view of causation is illustrated in the following diagram:

[Diagram: reductionism (upward causation only, from parts to wholes)]

But if we take seriously the notion of emergence and purposive action, we have a more complex picture, in which the laws of nature and random chance constrain purposive action and life forms, but do not entirely determine the actions of life forms — i.e., there is both upward and downward causation:

[Diagram: reductionism and holism (both upward and downward causation)]

It is important to note that this new view of causation does not eliminate the laws of nature — it just sets limits on what the laws of nature can explain. Specifically, the laws of nature have their greatest predictive power when we are dealing with the simplest physical phenomena; the complex wholes that are formed by the evolutionary process are less predictable because they can to some extent work around the laws of nature by employing the new properties that emerge from the joining of parts. For example, it is relatively easy to predict the motion of objects in the solar system by using the laws of nature; it is not so easy to predict the motion of life forms because life forms have properties that go beyond the simple properties possessed by objects in the solar system. As Robert Pirsig notes in Lila, life can practically be defined by its ability to transcend or work around the static patterns of the laws of nature:

The law of gravity . . . is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the single protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. (Lila (1991), p. 143)

Many scientists still resist the notion of teleological causation. But it could be argued that even scientists who vigorously deny that there is any purpose in the universe actually have an implicit teleology. Their teleology is simply the “laws of nature” themselves, and either the inner goal of all things is to follow those laws, or it is the goal of the laws to compel all things to follow their commands. Other implicit teleologies can be found in scientists’ assumptions that nature is inherently simple; that mathematics is the language of nature; or that all the particles and forces in nature play some necessary role. According to physicist Paul Davies,

There is . . . an unstated but more or less universal feeling among physicists that everything that exists in nature must have a ‘place’ or a role as part of some wider scheme, that nature should not indulge in profligacy by manifesting gratuitous entities, that nature should not be arbitrary. Each facet of physical reality should link in with the others in a ‘natural’ and logical way. Thus, when the particle known as the muon was discovered in 1937, the physicist Isidor Rabi was astonished. ‘Who ordered that?’ he exclaimed. (Paul Davies, The Mind of God: The Scientific Basis for a Rational World, pp. 209-10)

Ultimately, however, one cannot fully discuss the goals or ends of the universe without exploring the notion of Ideal Forms — that is, a blueprint for all things to follow or aspire to. The subject of Ideal Forms will be discussed in my next post.

What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it.  And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel;  or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
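To make this concrete, here is a minimal sketch (my own illustration, not something from the original post) of two equally self-consistent ways of assigning a “distance” to the same pair of coordinates: one treats them as points in a flat Euclidean plane, the other as latitude and longitude on a sphere. The function names and sample coordinates are arbitrary; the point is only that each geometry gives a perfectly coherent answer, and neither is more “real” than the other.

```python
import math

def euclidean_distance(p, q):
    """Straight-line distance, treating the coordinates as points in a flat plane."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def great_circle_distance(p, q, radius=1.0):
    """Distance along the surface of a sphere, treating the same coordinates
    as (latitude, longitude) in radians -- the geometry of spherical space."""
    (lat1, lon1), (lat2, lon2) = p, q
    central_angle = math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    )
    return radius * central_angle

# The same pair of coordinates, two internally consistent answers:
a, b = (0.0, 0.0), (math.pi / 4, math.pi / 4)
print(euclidean_distance(a, b))     # flat-plane geometry: about 1.11
print(great_circle_distance(a, b))  # spherical geometry: about 1.05
```

Within each system the familiar theorems hold; choosing between them is a matter of which model suits the problem at hand, which is exactly the situation mathematicians faced after the discovery of non-Euclidean geometries.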

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon, you would be moving at that speed — adjusting for the fact that the moon is also orbiting around the earth at 2,288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the galaxy at roughly 500,000 miles per hour, and our galaxy is moving at 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.
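As a rough illustration of why “how fast?” has no frame-free answer, and of how Einstein changed the rule for combining speeds, here is a small sketch of my own (the sample speeds are arbitrary, not figures from the post). At planetary speeds the everyday Galilean rule of simply adding velocities is an excellent approximation; near the speed of light the relativistic rule takes over, and no combination of speeds ever exceeds c.

```python
C = 299_792_458.0  # speed of light in meters per second

def galilean_add(u, v):
    """Everyday intuition: relative speeds simply add."""
    return u + v

def relativistic_add(u, v):
    """Special relativity's velocity-addition rule: (u + v) / (1 + uv/c^2)."""
    return (u + v) / (1 + (u * v) / C**2)

# At astronomical-but-slow speeds (30 km/s, roughly Earth's orbital speed),
# the two rules are effectively identical:
u = v = 30_000.0
print(galilean_add(u, v), relativistic_add(u, v))   # 60000.0 vs. about 59999.9994

# At 80% of the speed of light each, they diverge sharply:
u = v = 0.8 * C
print(galilean_add(u, v) / C)       # 1.6  -- the naive sum exceeds c
print(relativistic_add(u, v) / C)   # about 0.976 -- stays below c
```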

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
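Behind these statements is the standard length-contraction relation, L = L0 × sqrt(1 − v²/c²). The short sketch below is my own illustration with arbitrary sample speeds (not the figures from Wolfson’s lecture); it simply shows the measured length shrinking toward zero as the relative speed approaches c, which is the sense in which the universe has no extension at all “from the perspective of a photon.”

```python
import math

def contracted_length(rest_length, v_over_c):
    """Length measured in a frame where the object moves at speed v = (v_over_c) * c.
    L = L0 * sqrt(1 - v^2/c^2); as v approaches c, the measured length approaches 0."""
    return rest_length * math.sqrt(1 - v_over_c**2)

rest_length = 1.0  # any object one unit long in its own rest frame
for v_over_c in (0.1, 0.5, 0.9, 0.99, 0.999999):
    print(f"v = {v_over_c}c  ->  measured length = {contracted_length(rest_length, v_over_c):.6f}")
```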

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.
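To make the idea of “collapse” slightly more concrete, here is a toy simulation of my own (not drawn from Wheeler or Hawking) of a two-state system. Before measurement the state is described by two amplitudes; each measurement yields one definite outcome with probability equal to the squared amplitude (the Born rule), and over many runs the frequencies recover those probabilities. The particular amplitudes chosen are arbitrary.

```python
import random

# Toy two-state system: a superposition of 'here' and 'there'.
# The amplitudes are illustrative; any pair whose squares sum to 1 would do.
amp_here, amp_there = 0.6, 0.8          # 0.36 + 0.64 = 1.0

def measure():
    """One measurement: the superposition 'collapses' to a single outcome,
    chosen with probability equal to the squared amplitude (the Born rule)."""
    return "here" if random.random() < amp_here ** 2 else "there"

outcomes = [measure() for _ in range(100_000)]
print(outcomes.count("here") / len(outcomes))   # close to 0.36
print(outcomes.count("there") / len(outcomes))  # close to 0.64
```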

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to the very real problem of the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw in response to those sense impressions. Sometimes we think we see something, but we don’t. People make mistakes; they may see mirages, and in extreme cases they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results against the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue are required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging that not only are there other people smarter than we are, but that the collective efforts of even less-smart people are greater than our own individual efforts. Subduing one’s ego is also required in order to prepare for the necessity of changing one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)
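The claim that pooled observation outperforms even a capable individual observer can be given a back-of-the-envelope illustration. The sketch below is my own toy simulation, assuming observers whose errors are independent and unbiased (a real scientific community also has to contend with shared biases); under that assumption, the collective estimate lands far closer to the truth than a typical individual measurement does.

```python
import random

# Toy "wisdom of crowds" simulation, purely illustrative: each observer
# measures the same true quantity with substantial individual error, yet the
# pooled estimate is far closer to the truth than a typical individual one.
random.seed(1)

TRUE_VALUE = 100.0
N_OBSERVERS = 1_000

observations = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N_OBSERVERS)]

typical_individual_error = sum(abs(x - TRUE_VALUE) for x in observations) / N_OBSERVERS
pooled_estimate = sum(observations) / N_OBSERVERS

print(f"Average individual error: about {typical_individual_error:.1f}")
print(f"Pooled (collective) estimate: {pooled_estimate:.1f}")  # very close to 100
```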

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes or explanations for why things are as they are. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, there was also a loss.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity/mathematics, motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms expressed in the “laws of nature” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (The Metaphysical Foundations of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.

The mistake that these early scientists made, however, was to elevate a method into a metaphysics, by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even at the time this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Hume, and Berkeley. These philosophers’ critiques had little long-term impact, probably because scientists offered working predictive models and philosophers did not. But today, even as science is promising an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in developing knowledge. The necessity for rethinking the modern scientific view of metaphysics will be the subject of my next post.

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe, to the evolution of life, to chemical reactions, to the building of technological devices, scientists have learned to predict causal sequences and manipulate them for the benefit (or occasionally, detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete in terms of offering predictive models.

There remains the issue, however, of whether the predictive models of science really explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything that one needs to know, there are actually some serious disputes even among scientists about what causation is, what a valid explanation is, whether predictive models need to be realistic, and how real some of the entities scientists study, such as the “laws of nature” and the mathematics that is often part of those laws, actually are.

The fundamental issues of causation, explanation, and reality are discussed in detail in a book published in 1954, The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt. According to Burtt, the birth and growth of modern science came with the development of a new metaphysics, that is, the study of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there were four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the correct translation of the Greek word that Aristotle used is “explanation” rather than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form that an object is destined to have. According to Aristotle, all objects share the same matter — it is the arrangement of matter into their proper forms that causes matter to become a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe — God was the Supreme Good, the goal or telos to which all creation was drawn in pursuit of its final goals and perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it led to predictive models that successfully answered certain questions about natural processes that were previously mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge. Some human senses were unjustifiably dismissed as unreliable or deceptive and some types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.