Scientific Evidence for the Benefits of Faith

Increasingly, scientific studies have recognized the power of positive expectations in the treatment of people who are suffering from various illnesses. The so-called “placebo” effect is so powerful that studies generally try to control for it: fake pills, fake injections, or sometimes even fake surgeries will be given to one group while another group is offered the “real” treatment. If the real drug or surgery is no better than the fake drug/surgery, then the treatment is considered a failure. What has not been recognized until relatively recently is that the power of positive expectations can be considered a form of treatment in itself.

Recently, Harvard University has established a Program in Placebo Studies and the Therapeutic Encounter in order to study this very issue. For many scientists, the power of the placebo has been a scandal and an embarrassment, and the idea of offering a “fake” treatment to a patient seems to go against every ethical and professional principle. But the attitude of Ted Kaptchuk, head of the Harvard program, is that if something works, it’s worth studying, no matter how crazy and irrational it seems.

In fact, “crazy” and “irrational” seem to be apt words to describe the results of research on placebos. Researchers have found differences in the effectiveness of placebos based merely on appearance — large pills are more effective than small pills; two pills are better than one pill; “brand name” pills are more effective than generics; capsules are better than pills; and injections are the most effective of all! Even the color of pills affects the outcome. One study found that the most famous anti-anxiety medication in the world, Valium, has no measurable effect on a person’s anxiety unless the person knows he or she is taking it (see “The Power of Nothing” in the Dec. 12, 2011 New Yorker). The placebo is probably the oldest and simplest form of “faith healing” there is.

There are scientists who are critical of many of these placebo studies; they believe the power of placebos has been greatly exaggerated. Several studies have concluded that the placebo effect is small or insignificant, especially when objective measures of patient improvement are used instead of subjective self-reports.

However, it should be noted that the placebo effect is not simply a matter of patient feelings that are impossible to measure accurately — there is actually scientific evidence that the human brain manufactures chemicals in response to positive expectations. In the 1970s, it was discovered that people who reported a reduction in pain in response to a placebo were actually producing greater amounts of endorphins, substances in the brain chemically similar to morphine and heroin that reduce pain and are capable of producing feelings of euphoria (as in the “runner’s high”). Increasingly, studies of the placebo effect have relied on brain scans to actually track changes in the brain in response to a patient receiving a placebo, so measurement of effects is not merely a matter of relying on what a person says. One recent study found that patients suffering from Parkinson’s disease responded better to an “expensive” placebo than a “cheaper” placebo. Patients were given injections containing nothing but saline solution, but the group of patients who were told the solution cost $1500 per dose experienced significantly greater improvements in motor function than the patients who were given the “cheaper” placebo! This happens because the placebo effect boosts the brain’s production of dopamine, which counteracts the effects of Parkinson’s disease. Brain scans have confirmed greater dopamine activation in the brains of those given placebos.

Other studies have confirmed the close relation between the health of the human mind and the health of the body. Excessive stress weakens the immune system, creating an opening for illness. People who regularly practice meditation, on the other hand, can strengthen their immune system and, as a result, catch colds and the flu less often. The health effects of meditation do not depend on the religion of those practicing it — Buddhist, Christian, or Sikh. The mere act of meditation is what is important.

Why has modern medicine been so slow and reluctant to acknowledge the power of positive expectations and spirituality in improving human health? I think it’s because modern science has been based on certain metaphysical assumptions about nature which have been very valuable in advancing knowledge historically, but are ultimately limited and flawed. These assumptions are: (1) Anything that exists solely in the human mind is not real; (2) Knowledge must be based on what exists objectively, that is, what exists outside the mind; and (3) Everything in nature is based on material causation — impersonal objects colliding with or forming bonds with other impersonal objects. In many respects, these metaphysical assumptions were valuable in overcoming centuries of wrong beliefs and superstitions. Scientists learned to observe nature in a disinterested fashion, to discover how nature actually was and not how we wanted it to be. Old myths about gods and personal spirits shaping nature became obsolete, to be replaced by theories of material causation, which led to technological advances that brought the human race enormous benefits.

The problem with these metaphysical assumptions, however, is that they draw too sharp a separation between the human mind and what exists outside the mind. The human mind is part of reality, embedded in reality. Scientists rely on concepts created by the human mind to understand reality, and multiple, contradictory concepts and theories may be needed to understand reality. (See here and here). And the human mind can modify reality – it is not just a passive spectator. The mind affects the body because it is directly connected to the body. But the mind can also affect reality by directing the limbs to perform certain tasks — construct a house, create a computer, or build a spaceship.

So if the human mind can shape the reality of the body through positive expectations, can positive expectations bring additional benefits, beyond health? According to the American philosopher William James in his essay “The Will to Believe,” a leap of faith could be justified in certain restricted circumstances: when a momentous decision must be made, there is a large element of uncertainty, and there are not enough resources and time to reduce the uncertainty. (See this post.) In James’ view, in some cases, we must take the risk of supposing something is true, lest we lose the opportunity of gaining something beneficial. In short, “Faith in a fact can help create that fact.”

Scientific research on how expectations affect human performance tends to support James’ claim. Performance in sports is often influenced by athletes’ expectations of “good luck.” People who are optimistic and visualize their ideal goals are more likely to actually attain their goals than people who don’t. One recent study found that human performance in a color discrimination task is better when the subjects are provided a lamp that has a label touting environmental friendliness. Telling people about stereotypes before crucial tests affects how well people perform on tests — Asians who are told about how good Asians are at math perform better on math tests; women who are sent the message that women are not as smart perform less well on tests. When golfers are told that winning golf is a matter of intelligence, white golfers improve their performance; when golfers are told that golf is a matter of natural athleticism, black golfers do better.

Now, I am not about to tell you that faith is good in all circumstances and that you should always have faith. Applied across the board, faith can hurt you or even kill you. Relying solely on faith is not likely to cure cancer or other serious illnesses. Worshipers in some Pentecostal churches who handle poisonous snakes sometimes die from snake bites. And terrorists who think they will be rewarded in the afterlife for killing innocent people are truly deluded.

So what is the proper scope for faith? When should it be used and when should it not be used? Here are three rules:

First, faith must be restricted to the zone of uncertainty that always exists when evaluating facts. One can have faith in things that are unknown or not fully known, but one should not have faith in things that are contrary to facts that have been well-established by empirical research. One cannot simply say that one’s faith forbids belief in the scientific findings on evolution and the big bang, or that faith requires that one’s holy text is infallible in all matters of history, morals, and science.

Second, the benefits of faith cannot be used as evidence for belief in certain facts. A person who finds relief from Parkinson’s disease by imagining the healing powers of Christ’s love cannot argue that this proves that Jesus was truly the son of God, that Jesus could perform miracles, was crucified, and rose from the dead. These are factual claims that may or may not be historically accurate. Likewise with the golden plates of Joseph Smith that were allegedly the basis for the Book of Mormon or the ascent of the prophet Muhammad to heaven — faith does not prove any of these alleged facts. If there was evidence that one particular religious belief tended to heal people much better than other religious beliefs, then one might devote effort to examining if the facts of that religion were true. But there does not seem to be a difference among faiths — just about any faith, even the simplest faith in a mere sugar pill, seems to work.

Finally, faith should not run unnecessary risks. Faith is a supplement to reason, research, and science, not an alternative. Science, including medical science, works. If you get sick, you should go to a doctor first, then rely on faith. As the prophet Muhammad said, “Tie your camel first, then put your trust in Allah.”

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth was, in Kuhn’s view, highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles and predicts more phenomena to a greater degree of accuracy, the theory must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists,” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model, sometimes focusing on causation gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable even if those variables are merely associated with certain outcomes and may not cause those outcomes. To choose a very simple example, a model that posits that a rooster crowing leads to the rising of the sun can be a very good predictive model while saying nothing about causation. And there are actually many examples of this in contemporary scientific practice. Scientists working for the Netflix corporation on improving the prediction of customers’ movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don’t have a true causal model. (See Galit Shmueli, “To Explain or to Predict?” in Statistical Science, 2010, vol. 25, no. 3.)
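To make the point concrete, here is a minimal sketch of purely associational prediction (the ratings data and the similarity-weighting scheme are invented for illustration; this is not Netflix’s actual system). The model predicts a user’s rating for an unseen movie from that user’s ratings of movies whose rating patterns are correlated with it, with no causal account of why anyone likes anything:

```python
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are movies, 0 = not rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def similarity(a, b):
    """Cosine similarity between two movies' rating columns -- pure association."""
    mask = (a > 0) & (b > 0)                 # only users who rated both movies
    if not mask.any():
        return 0.0
    return float(np.dot(a[mask], b[mask]) /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, movie):
    """Predict a rating as a similarity-weighted average of the user's other ratings."""
    sims, vals = [], []
    for other in range(ratings.shape[1]):
        if other != movie and ratings[user, other] > 0:
            sims.append(similarity(ratings[:, movie], ratings[:, other]))
            vals.append(ratings[user, other])
    if not sims or sum(sims) == 0:
        return float(np.mean(ratings[ratings > 0]))   # fall back to the global mean
    return float(np.dot(sims, vals) / sum(sims))

print(round(predict(0, 2), 2))   # user 0's predicted rating for the movie they haven't seen
```

Such a model can only be judged by its predictive accuracy on held-out ratings; nothing in it explains why the associations hold.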

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three-dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, that there are no standards. Making good maps requires knowledge and great skill.

As I noted above, physicists tend to prefer Newton’s theory of gravity rather than Einstein’s to predict the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime, space and time are relative, not absolute, spacetime is curved in the presence of mass, and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers an incremental improvement over Newton’s, but Einstein’s picture of gravity is so radically different, Kuhn was right in seeing Einstein’s theory as a revolution. But scientists continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient. (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

Uncertainty, Debate, and Imprecision in Mathematics

If you remember anything about the mathematics courses you took in high school, it is that mathematics is the one subject in which there is absolute certainty and precision in all its answers. Unlike history, social science, and the humanities, which offer a variety of interpretations of subject matter, mathematics is unified and absolute.  Two plus two equals four and that is that. If you answer a math problem wrong, there is no sense in arguing a different interpretation with the teacher. Even the “hard sciences,” such as physics, may revise long-established conclusions, as new evidence comes in and new theories are developed. But mathematical truths are seemingly forever. Or are they?

You might not know it, but there has been a revolution in the human understanding of mathematics in the past 150 years that has undermined the belief that mathematics holds the key to absolute truth about the nature of the universe. Even as mathematical knowledge has increased, uncertainty has also increased, and different types of mathematics have been created that have different premises and are incompatible with each other. The value of mathematics remains clear. Mathematics increases our understanding, and science would not be possible without it. But the status of mathematics as a source of precise and infallible truth about reality is less clear.

For over 2000 years, the geometrical conclusions of the Greek mathematician Euclid were regarded as the most certain type of knowledge that could be obtained. Beginning with a small number of axioms, Euclid developed a system of geometry that was astonishing in breadth. The conclusions of Euclid’s geometry were regarded as absolutely certain, being derived from axioms that were “self-evident.”  Indeed, if one begins with “self-evident” truths and derives conclusions from those truths in a logical and verifiable manner, then one’s conclusions must also be undoubtedly true.

However, in the nineteenth century, these truths were undermined by the discovery of new geometries based on different axioms — the so-called “non-Euclidean geometries.” The conclusions of geometry were no longer absolute, but relative to the axioms that one chose. This became something of a problem for the concept of mathematical “proof.” If one can build different systems of mathematics based on different axioms, then “proof” only means that one’s conclusions are derivable from one’s axioms, not that one’s conclusions are absolutely true.

If you peruse the literature of mathematics on the definition of “axiom,” you will see what I mean. Many authors include the traditional definition of an axiom as a “self-evident truth.” But others define an axiom as a “definition” or “assumption,” seemingly as an acceptable alternative to “self-evident truth.” Surely there is a big difference between an “assumption,” a “self-evident truth,” and a “definition,” no? This confusing medley of definitions of “axiom” is the result of the nineteenth century discovery of non-Euclidean geometries. The issue has not been fully cleared up by mathematicians, but the Wikipedia entry on “axiom” probably represents the consensus of most mathematicians, when it states: “No explicit view regarding the absolute truth of axioms is ever taken in the context of modern mathematics, as such a thing is considered to be irrelevant.”  (!)

In reaction to the new uncertainty, mathematicians responded by searching for new foundations for mathematics, in the hopes of finding a set of axioms that would establish once and for all the certainty of mathematics. The “Foundations of Mathematics” movement, as it came to be called, ultimately failed. One of the leaders of the foundations movement, the great mathematician Bertrand Russell, declared late in life:

I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere. But I discovered that many mathematical demonstrations, which my teachers expected me to accept, were full of fallacies, and that, if certainty were indeed discoverable in mathematics, it would be in a new kind of mathematics, with more solid foundations than those that had hitherto been thought secure. But as the work proceeded, I was continually reminded of the fable about the elephant and the tortoise. Having constructed an elephant upon which the mathematical world could rest, I found the elephant tottering, and proceeded to construct a tortoise to keep the elephant from falling. But the tortoise was no more secure than the elephant, and after some twenty years of arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. (The Autobiography of Bertrand Russell)

Today, there are a variety of mathematical systems based on a variety of assumptions, and no one yet has succeeded in reconciling all the systems into one, fundamental, true system of mathematics. In fact, you wouldn’t know it from high school math, but some topics in mathematics have led to sharp divisions and debates among mathematicians. And most of these debates have never really been resolved — mathematicians have simply grown to tolerate the existence of different mathematical systems in the same way that ancient pagans accepted the existence of multiple gods.

Some of the most contentious issues in mathematics have revolved around the concept of infinity. In the nineteenth century, the mathematician Georg Cantor developed a theory about different sizes of infinite sets, but his arguments immediately attracted criticism from fellow mathematicians and remain controversial to this day. The central problem is that measuring infinity, assigning a quantity to infinity, is inherently an endless process. Once you think you have measured infinity, you simply add a one to it, and you have something greater than infinity — which means your original infinity was not truly infinite. Henri Poincare, one of the greatest mathematicians in history, rejected Cantor’s theory, noting: “Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already.” Stephen Simpson, a mathematician at Pennsylvania State University, likewise asks, “What truly infinite objects exist in the real world?” Objections to Cantor’s theory of infinity led to the emergence of new mathematical schools of thought such as finitism and intuitionism, which rejected the legitimacy of infinite mathematical objects.
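For reference, the Cantorian claims at issue can be stated compactly. In Cantor’s arithmetic of infinite cardinals, adding one element to a countably infinite set does not make it any larger, but forming the set of all its subsets does — and that is how different “sizes” of infinity arise:

$$\aleph_0 + 1 = \aleph_0, \qquad \text{yet} \qquad 2^{\aleph_0} > \aleph_0,$$

so that, for example, the set of real numbers is strictly larger than the set of natural numbers: $|\mathbb{R}| = 2^{\aleph_0} > \aleph_0 = |\mathbb{N}|$. It is precisely the legitimacy of treating such completed infinite totalities as mathematical objects that the finitists and intuitionists deny.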

Cantor focused his mental energies on concepts of the infinitely large, but another idea in mathematics was also controversial — that of the infinitely small, the “infinitesimal.” To give you an idea of how controversial the infinitesimal has been, I note that Cantor himself rejected the existence of infinitesimals! In Cantor’s view, the concept of something being infinitely small was inherently contradictory — if something is small, then it is inherently finite! And yet, infinitesimals have been used by mathematicians for hundreds of years. The infinitesimal was used by Leibniz in his version of calculus, and it is used today in the field of mathematics known as “non-standard analysis.” There is still no consensus among mathematicians today about the existence or legitimacy of infinitesimals, but infinitesimals, like imaginary numbers, seem to be useful in calculations, and as long as it works, mathematicians are willing to tolerate them, albeit not without some criticism.

The existence of different types of mathematical systems leads to some strange and contradictory answers to some of the simplest questions in mathematics. In school, you were probably taught that parallel lines never meet. That is true in Euclidean geometry, but not in hyperbolic geometry. In projective geometry, parallel lines meet at infinity!

Or consider the infinite decimal 0.9999 . . .  Is this infinite decimal equal to 1? The common sense answer that students usually give is “of course not.” But most mathematicians argue that both numbers are equivalent! Their logic is as follows: in the system of “real numbers,” there is no number between 0.999. . . and 1. Therefore, if you subtract 0.999. . .  from 1, the result is zero. And that means both numbers are the same!
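A compact version of the algebra behind the mainstream answer, working entirely within the real numbers:

$$x = 0.999\ldots \;\Rightarrow\; 10x = 9.999\ldots \;\Rightarrow\; 10x - x = 9 \;\Rightarrow\; 9x = 9 \;\Rightarrow\; x = 1.$$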

However, in the system of numbers known as “hyperreals,” a system which includes infinitesimals, there exists an infinitesimal number between 0.999. . .  and 1. So under this system, 0.999. . .  and 1 are NOT the same! (A great explanation of this paradox is here.) So which system of numbers is the correct one? There is no consensus among mathematicians. But there is a great joke:

How many mathematicians does it take to screw in a light bulb?

0.999 . . .

The invention of computers has led to the creation of a new system of mathematics known as “floating point arithmetic.” This was necessary because, for all of their amazing capabilities, computers do not have enough memory or processing capability to precisely deal with all of the real numbers. To truly depict an infinite decimal, a computer would need an infinite amount of memory. So floating point arithmetic deals with this problem by using a degree of approximation.

One of the odd characteristics of the standard version of floating point arithmetic is that there is not one zero, but two zeros: a positive zero and a negative zero. What’s that you say? There’s no such thing as positive zero and negative zero? Well, not in the number system you were taught, but these numbers do exist in floating point arithmetic. And you can use them to divide by zero, which is something else I bet you thought you couldn’t do.  One divided by positive zero equals positive infinity, while one divided by negative zero equals negative infinity!
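Here is a minimal Python demonstration of these oddities (signed zeros are part of the IEEE 754 standard that floating point arithmetic follows; plain Python raises an exception on float division by zero, so NumPy is used below to expose the underlying IEEE behavior):

```python
import math
import numpy as np

# Floating point cannot represent most decimals exactly, so results are approximations:
print(0.1 + 0.2)           # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)    # False

# There are two zeros. They compare as equal, but the sign bit is really stored:
print(0.0 == -0.0)                  # True
print(math.copysign(1.0, -0.0))     # -1.0

# Dividing by the signed zeros yields signed infinities:
with np.errstate(divide="ignore"):
    print(np.float64(1.0) / np.float64(0.0))    # inf
    print(np.float64(1.0) / np.float64(-0.0))   # -inf
```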

What the history of mathematics indicates is that the world is not converging toward one, true system of mathematics, but creating multiple, incompatible systems of mathematics, each of which has its own logic. If you think of mathematics as a set of tools for understanding reality, rather than reality itself, this makes sense. You want a variety of tools to do different things. Sometimes you need a hammer, sometimes you need a socket wrench, sometimes you need a Phillips screwdriver, etc. The only true test of a tool is how useful it is — a single tool that tried to do everything would be unhelpful.

You probably didn’t know about most of the issues in mathematics I have just mentioned, because they are usually not taught, whether at the elementary school level, the high school level, or even in college. Mathematics education consists largely of being taught the right way to perform a calculation, and then doing a variety of these calculations over and over and over. . . .

But why is that? Why is mathematics education just about learning to calculate, and not discussing controversies? I can think of several reasons.

One reason may be that most people who go into mathematics tend to have a desire for greater certainty. They don’t like uncertainty and imprecise answers, so they learn math, avoid mathematical controversies or ignore them, and then teach students a mathematics without uncertainty. I recall my college mathematics instructor declaring to class one day that she went into mathematics precisely because it offered sure answers. My teacher certainly had that much in common with Bertrand Russell (quoted above).

Another reason surely is that there is a large element of indoctrination in education generally, and airing mathematical controversies among students might have the effect of undermining authority. It is true that students can discuss controversies in the social sciences and humanities, but that’s because we live in a democratic society in which there are a variety of views on social issues, and no one group has the power to impose a single view on the classroom. But even a democratic society is not interested in teaching controversies in mathematics — it’s interested in creating good workers for the economy. We need people who can make change, draw up a budget, and measure things, not people who challenge widely-accepted beliefs.

This utilitarian view of mathematics education seems to be universal, shared by democratic and totalitarian governments alike. Forcing students to perform endless calculations without allowing them to ask “why” is a great way to bore children and make them hate math, but at least they’ll be obedient citizens.

What is “Mythos” and “Logos”?

The terms “mythos” and “logos” are used to describe the transition in ancient Greek thought from the stories of gods, goddesses, and heroes (mythos) to the gradual development of rational philosophy and logic (logos). The former is represented by the earliest Greek thinkers, such as Hesiod and Homer; the latter is represented by later thinkers called the “pre-Socratic philosophers” and then Socrates, Plato, and Aristotle. (See the book: From Myth to Reason? Studies in the Development of Greek Thought).

In the earliest, “mythos” stage of development, the Greeks saw events of the world as being caused by a multitude of clashing personalities — the “gods.” There were gods for natural phenomena such as the sun, the sea, thunder and lightning, and gods for human activities such as winemaking, war, and love. The primary mode of explanation of reality consisted of highly imaginative stories about these personalities. However, as time went on, Greek thinkers became critical of the old myths and proposed alternative explanations of natural phenomena based on observation and logical deduction. Under “logos,” the highly personalized worldview of the Greeks became transformed into one in which natural phenomena were explained not by invisible superhuman persons, but by impersonal natural causes.

However, many scholars argue that there was not such a sharp distinction between mythos and logos historically, that logos grew out of mythos, and elements of mythos remain with us today.

For example, ancient myths provided the first basic concepts used subsequently to develop theories of the origins of the universe. We take for granted the words that we use every day, but the vast majority of human beings never invent a single word or original concept in their lives — they learn these things from their culture, which is the end-product of thousands of years of speaking and writing by millions of people long-dead. The very first concepts of “cosmos,” “beginning,” “nothingness,” and differentiation from a single substance — these were not present in human culture for all time, but originated in ancient myths. Subsequent philosophers borrowed these concepts from the myths, while discarding the overly-personalistic interpretations of the origins of the universe. In that sense, mythos provided the scaffolding for the growth of philosophy and modern science. (See Walter Burkert, “The Logic of Cosmogony” in From Myth to Reason? Studies in the Development of Greek Thought.)

An additional issue is the fact that not all myths are wholly false. Many myths are stories that communicate truths even if the characters and events in the story are fictional. Socrates and Plato denounced many of the early myths of the Greeks, but they also illustrated philosophical points with stories that were meant to serve as analogies or metaphors. Plato’s allegory of the cave, for example, is meant to illustrate the ability of the educated human to perceive the true reality behind surface impressions. Could Plato have made the same philosophical point in a literal language, without using any stories or analogies? Possibly, but the impact would be less, and it is possible that the point would not be effectively communicated at all.

Some of the truths that myths communicate are about human values, and these values can be true even if the stories in which the values are embedded are false. Ancient Greek religion contained many preposterous stories, and the notion of personal divine beings directing natural phenomena and intervening in human affairs was false. But when the Greeks built temples and offered sacrifices, they were not just worshiping personalities — they were worshiping the values that the gods represented. Apollo was the god of light, knowledge, and healing; Hera was the goddess of marriage and family; Aphrodite was the goddess of love; Athena was the goddess of wisdom; and Zeus, the king of the gods, upheld order and justice. There’s no evidence at all that these personalities existed or that sacrifices to these personalities would advance the values they represented. But a basic respect for and worshipful disposition toward the values the gods represented was part of the foundation of ancient Greek civilization. I don’t think it was a coincidence that the city of Athens, whose patron goddess was Athena, went on to produce some of the greatest philosophers the world has seen — love of wisdom is the prerequisite for knowledge, and that love of wisdom grew out of the culture of Athens. (The ancient Greek word philosophia literally means “love of wisdom.”)

It is also worth pointing out that worship of the gods, for all of its superstitious aspects, was not incompatible with even the growth of scientific knowledge. Modern western medicine originated in the healing temples devoted to the god Asclepius, the son of Apollo and the god of medicine. Both of the great ancient physicians Hippocrates and Galen are reported to have begun their careers as physicians in the temples of Asclepius, the first hospitals. Hippocrates is widely regarded as the father of western medicine and Galen is considered the most accomplished medical researcher of the ancient world. As love of wisdom was the prerequisite for philosophy, reverence for healing was the prerequisite for the development of medicine.

Karen Armstrong has written that ancient myths were never meant to be taken literally, but were “metaphorical attempts to describe a reality that was too complex and elusive to express in any other way.” (A History of God) I am not sure that’s completely accurate. I think it most likely that the mass of humanity believed in the literal truth of the myths, while educated human beings understood the gods to be metaphorical representations of the good that existed in nature and humanity. Some would argue that this use of metaphors to describe reality is deceptive and unnecessary. But a literal understanding of reality is not always possible, and metaphors are widely used even by scientists.

Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book, Making Truth: Metaphor in Science. According to Brown, the history of the human understanding of the atom, which cannot be directly seen, began with a simple metaphor of atoms as billiard balls; later, scientists compared atoms to plum pudding; then they compared the atom to our solar system, with electrons “orbiting” around a nucleus. There has been a gradual improvement in our models of the atom over time, but ultimately, there is no single, correct literal representation of the atom. Each model illustrates an aspect or aspects of atomic behavior — no one model can capture all aspects accurately. Even the notion of atoms as particles is not fully accurate, because atoms can behave like waves, without a precise position in space as we normally think of particles as having. The same principle applies to models of the molecule as well. (Brown, chapters 4-6) A number of scientists have compared the imaginative construction of scientific models to map-making — there is no single, fully accurate way to map the earth (using a flat surface to depict a sphere), so we are forced to use a variety of maps at different scales and projections, depending on our needs.

Sometimes the visual models that scientists create are quite unrealistic. The model of the “energy landscape” was created by biologists in order to understand the process of protein folding — the basic idea was to imagine a ball rolling on a surface pitted with holes and valleys of varying depth. As the ball would tend to seek out the low points on the landscape (due to gravity), proteins would tend to seek the lowest possible free energy state. All biologists know the energy landscape model is a metaphor — in reality, proteins don’t actually go rolling down hills! But the model is useful for understanding a process that is highly complex and cannot be directly seen.
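As a toy illustration of the metaphor (a made-up one-dimensional “landscape,” not a real protein model), the “ball” below repeatedly takes small downhill steps and settles into a nearby valley, just as the folding protein is pictured as settling into a low free-energy state:

```python
def energy(x):
    """A made-up one-dimensional 'energy landscape' with two valleys."""
    return x**4 - 3 * x**2 + 0.5 * x

def slope(x):
    """Derivative of the energy: the steepness of the landscape at x."""
    return 4 * x**3 - 6 * x + 0.5

x = 2.0          # starting position of the 'ball'
step = 0.01      # size of each downhill step
for _ in range(2000):
    x -= step * slope(x)    # roll a little way downhill

print(f"settled at x = {x:.3f}, energy = {energy(x):.3f}")
```

The landscape here is invented, but the logic is the one the biologists’ picture relies on: the system ends up wherever the downhill path takes it, which may be a local valley rather than the global minimum.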

What is particularly interesting is that some of the metaphorical models of science are frankly anthropomorphic — they are based on qualities or phenomena found in persons or personal institutions. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly; proteins that don’t fold correctly are either treated or dismantled so that they do not cause damage to the larger organism — a process that has been given a medical metaphor: “protein triage.” (Brown, chapters 7-8) Even referring to the “laws of physics” is to use a metaphorical comparison to human law. So even as logos has triumphed over the mythos conception that divine personalities rule natural phenomena, qualities associated with personal beings have continued to sneak into modern scientific models.

The transition of a mythos-dominated worldview to a logos-dominated worldview was a stupendous achievement of the ancient Greeks, and modern philosophy, science, and civilization would not be possible without it. But the transition did not involve a complete replacement of one worldview with another, but rather the building of additional useful structures on top of a simple foundation. Logos grew out of its origins in mythos, and retains elements of mythos to this day. The compatibilities and conflicts between these two modes of thought are the thematic basis of this website.

Related: A Defense of the Ancient Greek Pagan Religion

Einstein’s Judeo-Quaker Pantheism

I recently came across a fascinating website, Einstein: Science and Religion, which I hope you will find time to peruse.  The website, edited by Arnold Lesikar, Professor Emeritus in the  Department of Physics, Astronomy, and Engineering Science at St. Cloud State University in Minnesota, contains a collection of Einstein’s various comments on religion, God, and the relationship between science and religion.

Einstein’s views on religion have been frequently publicized and commented on, but it is difficult to get an accurate and comprehensive assessment of Einstein’s actual views on religion because of the tendency of both believers and atheists to cherry-pick particular quotations or to quote out of context. Einstein’s actual views on religion are complex and multifaceted, and one is apt to get the wrong impression by focusing on just one or several of Einstein’s comments.

One should begin by noting that Einstein did not accept the notion of a personal God, an omnipotent superbeing who listens to our prayers and intervenes in the operations of the laws of the universe. Einstein repeatedly rejected this notion of God throughout his life, from his adolescence to old age. He also believed that many, if not most, of the stories in the Bible were untrue.

The God Einstein did believe in was the God of the philosopher Spinoza. Spinoza conceived of God as being nothing more than the natural order underlying this universe — this order was fundamentally an intelligent order, but it was a mistake to conceive of God as having a personality or caring about man. Spinoza’s view was known as pantheism, and Einstein explicitly stated that he was a proponent of Spinoza and of pantheism. Einstein also argued that ethical systems were a purely human concern, with no superhuman authority figure behind them, and there was no afterlife in which humans could be rewarded or punished. In fact, Einstein believed that immortality was undesirable anyway. Finally, Einstein sometimes expressed derogatory views of religious institutions and leaders, believing them responsible for superstition and bigotry among the masses.

However, it should also be noted that Einstein’s skepticism and love of truth were too deep to result in a rigid and dogmatic atheism. Einstein described himself variously as an agnostic or pantheist and disliked the arrogant certainty of atheists. He even refused to definitively reject the idea of a personal God, believing that there were too many mysteries behind the universe to come to any final conclusions about God. He also wrote that he did not want to destroy the idea of a personal God in the minds of the masses, because even a primitive metaphysics was better than no metaphysics at all.

Even while rejecting the notion of a personal God, Einstein described God as a spirit, a spirit with the attribute of thought or intelligence: “[E]very one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe — a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.” In an interview, Einstein expressed a similar view:

If there is any such concept as a God, it is a subtle spirit, not an image of a man that so many have fixed in their minds. In essence, my religion consists of a humble admiration for this illimitable superior spirit that reveals itself in the slight details that we are able to perceive with our frail and feeble minds.

Distinguishing between the religious feeling of the “naïve man” and the religious feeling of the scientist, Einstein argued:  “[The scientist’s] religious feeling takes the form of a rapturous amazement at the harmony of natural law, which reveals an intelligence of such superiority that, compared with it, all the systematic thinking and acting of human beings is an utterly insignificant reflection.”

While skeptical and often critical of religious institutions, Einstein also believed that religion played a valuable and necessary role for civilization in creating “superpersonal goals” for human beings, goals above and beyond self-interest, that could not be established by pure reason.  Reason could provide us with the facts of existence, said Einstein, but the question of how we should live our lives necessarily required going beyond reason. According to Einstein:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly.

Einstein even argued that the establishment of moral goals by religious prophets was one of the most important accomplishments of humanity, eclipsing even scientific accomplishment:

Our time is distinguished by wonderful achievements in the fields of scientific understanding and the technical application of those insights. Who would not be cheered by this? But let us not forget that knowledge and skills alone cannot lead humanity to a happy and dignified life. Humanity has every reason to place the proclaimers of high moral standards and values above the discoverers of objective truth. What humanity owes to personalities like Buddha, Moses, and Jesus ranks for me higher than all the achievements of the enquiring and constructive mind.

Einstein’s views of Jesus are particularly intriguing. Einstein never rejected his Jewish identity and refused all attempts by others to convert him to Christianity. Einstein also refused to believe the stories of Jesus’s alleged supernatural powers. But Einstein also believed the historical existence of Jesus was a fact, and Einstein regarded Jesus as one of the greatest — if not the greatest — of religious prophets:

As a child, I received instruction both in the Bible and in the Talmud. I am a Jew, but I am enthralled by the luminous figure of the Nazarene. . . . No one can read the Gospels without feeling the actual presence of Jesus. His personality pulsates in every word. No myth is filled with such life. How different, for instance, is the impression which we receive from an account of legendary heroes of antiquity like Theseus. Theseus and other heroes of his type lack the authentic vitality of Jesus. . . . No man can deny the fact that Jesus existed, nor that his sayings are beautiful. Even if some of them have been said before, no one has expressed them so divinely as he.

Toward the end of his life, Einstein, while remaining Jewish, expressed great admiration for the Christian sect known as the Quakers. Einstein stated that the “Society of Friends,” as the Quakers referred to themselves, had the “highest moral standards” and their influence was “very beneficial.” Einstein even declared, “If I were not a Jew I would be a Quaker.”

Now Einstein’s various pronouncements on religion are scattered in multiple sources, so it is not surprising that people may get the wrong impression from examining just a few quotes. Sometimes stories of Einstein’s religious views are simply made up, implying that Einstein was a traditional believer. Other times, atheists will emphasize Einstein’s rejection of a personal God, while completely overlooking Einstein’s views on the limits of reason, the necessity of religion in providing superpersonal goals, and the value of the religious prophets.

For some people, a religion without a personal God is not a true religion. But historically, a number of major religions do not hold belief in a personal God as central to their belief system, including Taoism, Buddhism, and Confucianism. In addition, many theologians in monotheistic faiths describe God in impersonal terms, or stress that the attributes of God may be represented symbolically as personal, but that God himself cannot be adequately described as a person. The great Jewish theologian Maimonides argued that although God had been described allegorically and imperfectly by the prophets as having the attributes of a personal being, God did not actually have human thoughts and emotions. The twentieth century Christian theologian Paul Tillich argued that God was not “a being” but the “Ground of Being” or the “Power of Being” existing in all things.

However, it is somewhat odd that while rejecting the notion of a personal God, Einstein saw God as a spirit that seemingly possessed an intelligence far greater than that of human beings. In that, Einstein was similar to Spinoza, who believed God had the attribute of “thought” and that the human mind was but part of the “infinite intellect of God.” But is not intelligence a quality of personal beings? In everyday life, we don’t think of orbiting planets or stars or rocks or water as possessing intelligence, and even if we attribute intelligence to lower forms of life such as bacteria and plants, we recognize that this sort of intelligence is primitive. If you ask people what concrete, existing things best possess the quality of intelligence, they will point to humans — personal beings! Yet, both Spinoza and Einstein attribute vast, or even infinite, intelligence to God, while denying that God is a personal being!

I am not arguing that Spinoza and Einstein were wrong or somehow deluding themselves when they argued that God was not a personal being. I am simply pointing out how difficult it is to adequately and accurately describe God. I think Spinoza and Einstein were correct in seeking to modify the traditional concept of God as a type of omnipotent superperson with human thoughts and emotions. But at the same time, it can be difficult to describe God in a way that does not use attributes that are commonly thought of as belonging to personal beings. At best, we can use analogies from everyday experience to indirectly describe God, while acknowledging that all analogies fall short.

 

What Are the Laws of Nature? – Part Two

In a previous post, I discussed the mysterious status of the “laws of nature,” pointing out that these laws seem to be eternal, omnipresent, and possessing enormous power to shape the universe, although they have no mass and no energy.

There is, however, an alternative view of the laws of nature proposed by thinkers such as Ronald Giere and Nancy Cartwright, among others. In this view, it is a fallacy to suppose that the laws of nature exist as objectively real entities — rather, what we call the laws of nature are simplified models that the human mind creates to explain and predict the operations of the universe. The laws were created by human beings to organize information about the cosmos. As such, the laws are not fully accurate descriptions of how the universe actually works, but generalizations; and like nearly all generalizations, there are numerous exceptions when the laws are applied to particular circumstances. We retain the generalizations because they excel at organizing and summarizing vast amounts of information, but we should never make the mistake of assuming that the generalizations are real entities. (See Science Without Laws and How the Laws of Physics Lie.)

Consider one of the most famous laws of nature, Isaac Newton’s law of universal gravitation. According to this law, the gravitational relationship between any two bodies in the universe is determined by the size (mass) of the two bodies and their distance from each other. More specifically, any two bodies in the universe attract each other with a force that is (1) directly proportional to the product of their masses and (2) inversely proportional to the square of the distance between them.  The equation is quite simple:

F = G \frac{m_1 m_2}{r^2}

where F is the force between the two masses, G is the gravitational constant, m1 and m2 are the masses of the two bodies, and r is the distance between the centers of the two bodies.
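As a quick illustration of how easy the law is to apply, here is a minimal sketch that plugs in approximate textbook values for the Sun and the Earth (the figures are rounded and used only for illustration):

```python
G = 6.674e-11   # gravitational constant, N·m²/kg²

def gravitational_force(m1, m2, r):
    """Newton's law: force in newtons between masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

m_sun   = 1.989e30    # kg
m_earth = 5.972e24    # kg
r       = 1.496e11    # mean Sun-Earth distance, m

print(f"{gravitational_force(m_sun, m_earth, r):.2e} N")   # roughly 3.5e22 N
```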

Newton’s law was quite valuable in helping predict the motions of the planets in our solar system, but in some cases the formula did not quite match astronomical observations. The orbit of the planet Mercury in particular never fit Newton’s law, no matter how much astronomers tried to fiddle with the law to get the right results. It was only when Einstein introduced his theory of relativity that astronomers could correctly predict the motions of all the planets, including Mercury. Why did Einstein’s theory work better for Mercury? Because as the planet closest to the sun, Mercury is most affected by the massive gravitation of the sun, and Newton’s law becomes less accurate under the conditions of massive gravitation.

Einstein’s equations for gravity are known as the “field equations,” and although they are better at predicting the motions of the planets, they are extremely complex — really too complex to apply in many situations. In fact, physicist Stephen Hawking has noted that scientists still often use Newton’s law of gravity because it is much simpler and a good enough approximation in most cases.

So what does this imply about the reality of Newton’s law of universal gravitation? Does Newton’s law float around in space or in some transcendent realm directing the motions of the planets, until the gravitation becomes too large, and then it hands off its duties to the Einstein field equations? No, of course not. Newton’s law is an approximation that works for many, but not all cases. Physicists use it because it is simple and “good enough” for most purposes. When the approximations become less and less accurate, a physicist may switch to the Einstein field equations, but this is a human value judgment, not the voice of nature making a decision to switch equations.

One other fact is worth noting: in Newton’s theory, gravity is a force between two bodies. In Einstein’s theory, gravity is not a real force — what we call a gravitational force is simply how we perceive the distortion of the space-time fabric caused by massive objects. Physicists today refer to gravity as a “fictitious force.” So why do professors of physics continue to use Newton’s law and teach this “fictitious force” law to their students? Because it is simpler to use and still a good enough approximation for most cases. Newton’s law can’t possibly be objectively real — if it is, Einstein is wrong.

The school of thought known as “scientific realism” would dispute these claims, arguing that even if the laws of nature as we know them are approximations, there are still real, objective laws underneath these approximations, and as science progresses, we are getting closer and closer to knowing what these laws really are. In addition, its proponents argue that it would be absurd to suppose that we could make progress in technology unless we were getting better and better at knowing what the true laws are really like.

The response of Ronald Giere and Nancy Cartwright to the realists is as follows: it’s a mistake to assume that if our laws are approximations and our approximations are getting better and better that therefore there must be real laws underneath. What if nature is inherently so complex in its causal variables and sequences that there is no objectively real law underneath it all? Nancy Cartwright notes that engineers who must build and maintain technological devices never apply the “laws of nature” directly to their work without a great deal of tinkering and modifications to get their mental models to match the specific details of their device. The final blueprint that engineers may create is a highly specific and highly complex model that is a precise match for the device, but of very limited generalizability to the universe as a whole. In other words, there is an inherent and unavoidable tradeoff between explanatory power and accuracy. The laws of nature are valued by us because they have very high explanatory power, but specific circumstances are always going to involve a mix of causal forces that refute the predictions of the general law. In order to understand how two bodies behave, you not only need to know gravity, you need to know the electric charge of the two bodies, the nuclear force, any chemical forces, the temperature, the speed of the objects, and additional factors, some of which can never be calculated precisely. According to Cartwright,

. . . theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behavior is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all. . . . God may have written just a few laws and grown tired. We do not know whether we are living in a tidy universe or an untidy one. (How the Laws of Physics Lie, p. 49)

Cartwright makes it clear that she believes in causal powers in nature — it’s just that causal powers are not the same as laws, which are simply general principles for organizing information.

Some philosophers and scientists would go even further. They argue that science is able to develop and improve models for predicting phenomena, but the underlying nature of reality cannot be grasped directly, even if our models are quite excellent at predicting. This is because there are always going to be aspects of nature that are non-observable and there are often multiple theories that can explain the same phenomenon. This school of thought is known as instrumentalism.

Stephen Hawking appears to be sympathetic to such a view. In a discussion of his use of “imaginary time” to model how the universe developed, Hawking stated, “a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds. So it is meaningless to ask: which is real, ‘real’ or ‘imaginary’ time? It is simply a matter of which is the more useful description.” (A Brief History of Time, p. 144) In a later essay, Hawking made the case for what he calls “model-dependent realism.” He argues:

it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither one can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration. . . . Each theory may have its own version of reality, but according to model-dependent realism, that diversity is acceptable, and none of the versions can be said to be more real than any other.

Hawking concludes that given these facts, it may well be impossible to develop a unified theory of everything, that we may have to settle for a diversity of models. (It’s not clear to me how Hawking’s “model-dependent realism” differs from instrumentalism, since they seem to share many aspects.)

Intuitively, we are apt to conclude that our progress in technology is proof enough that we are understanding reality better and better, getting closer and closer to the Truth. But it’s actually quite possible for science to develop better and better predictive models while still retaining very serious doubts and disputes about many fundamental aspects of reality. Among physicists and cosmologists today, there is still disagreement on the following issues: are there really such things as subatomic particles, or are these entities actually fields, or something else entirely?; is the flow of time an illusion, or is time the chief fundamental reality?; are there an infinite number of universes in a wider multiverse, with infinite versions of you, or is this multiverse theory a mistaken interpretation of uncertainty at the quantum level?; are the constants of the universe really constant, or do they sometimes change?; are mathematical objects themselves the ultimate reality, or do they exist only in the mind? A number of philosophers of science have concluded that science does indeed progress by creating more and better models for predicting, but they make an analogy to evolution: life forms may be advancing and improving, but that doesn’t mean they are getting closer and closer to some final goal.

In my previous post, I discussed the view that the “laws of nature” appear to exist everywhere and have the awesome power to shape the universe and direct the motions of the stars and planets, despite the fact that the laws themselves have no matter and no energy. But if the laws of nature are creations of our minds, what then? I can’t prove that there are no real laws behind the mental models that we create. It seems likely that there must be some such laws, but perhaps they are so complex that the best we can do is create simplified models of them. Or perhaps we must acknowledge that the precise nature of the cosmological order is mysterious, and any attempt to understand and describe this order must use a variety of concepts, analogies, and stories created by our minds. Some of these concepts, analogies, and stories are clearly better than others, but we will never find one mental model that is a perfect fit for all aspects of reality.

What Are the Laws of Nature?

According to modern science, the universe is governed by laws, and it is the job of scientists to discover those laws. However, the question of where these laws come from, and what their precise nature is, remains mysterious.

If laws are all that are needed to explain the origins of the universe, the laws must somehow have existed prior to the universe, that is, eternally. But this raises some puzzling issues. Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed? Does it make sense to speak of the law of conservation of mass existing before mass existed? For that matter, does it make sense to speak of Mendel’s laws of genetics existing before there was DNA, before there were nucleotides to make up DNA, before there were even atoms of carbon and nitrogen to make up nucleotides? It took the universe somewhere between 150 million and 1 billion years to create the first heavy elements, including atoms of carbon and nitrogen. Were Mendel’s laws of genetics sitting around impatiently that whole time waiting for something to happen? Or does it make sense to think of laws evolving with the universe, in which case we still have a chicken-and-egg question — did evolving laws precede the creation of material forms or did evolving material forms precede the laws?

Furthermore, where do the laws of nature exist? Do they exist in some other-worldly Platonic realm beyond time and space? Many, if not most, mathematicians and physicists are inclined to believe that mathematical equations run the universe, and these equations exist objectively. But if laws/equations govern the operations of the universe, they must exist everywhere, even though we can’t sense them directly at all. Why? Because, according to Einstein, information cannot travel instantaneously across large distances – in fact, information cannot travel faster than the speed of light. Now, the radius of the observable universe is about 46 billion light-years, so if we imagine the laws of nature floating around in space at the center of the universe, it would take at least 46 billion years for the commands issued by the laws of nature to reach the edge of the universe — much too slow. Even within our tiny solar system, it takes a little over 8 minutes for light from the sun to reach the earth, so information flow across even that small distance would involve a significant time lag. However, our astronomical observations indicate no lag time — the effect of laws is instantaneous, indicating that the laws must exist everywhere — in other words, laws of nature have the property of omnipresence.
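The arithmetic here is straightforward; a rough sketch with rounded values illustrates the scale of the lag:

```python
# Rough sketch: travel times at the speed of light, using rounded values.

c = 2.998e8                 # speed of light, m/s
au = 1.496e11               # mean Sun-Earth distance, m (approximate)
light_year_m = 9.461e15     # one light-year in meters
seconds_per_year = 3.156e7

# Sun to Earth: a little over 8 minutes.
print(f"Sun to Earth: about {au / c / 60:.1f} minutes")

# Center of the observable universe to its edge: about 46 billion years.
radius_ly = 46e9            # radius of the observable universe, light-years
years = radius_ly * light_year_m / c / seconds_per_year
print(f"Center to edge: about {years:.1e} years")
```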

What sort of power do the laws of nature have? Since they direct the operations of the universe, they must have immense power. Either they have the capability to directly shape and move stars, planets, and entire galaxies, or they simply issue commands that stars, planets, and galaxies follow. In either case, should not this power be detectable as a form of energy? And if it is a form of energy, shouldn’t this energy have the potential to be converted into matter, according to the principle of mass-energy equivalence? In that case, the laws of nature should, in principle, be observable as energy or mass. But the laws of nature appear to have no detectable energy and no detectable mass.
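The mass-energy equivalence referred to here is Einstein’s familiar relation

E = mc^2

where E is energy, m is mass, and c is the speed of light; in principle, any real store of energy corresponds to some equivalent amount of mass.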

Finally, there is the question of the fundamental unity of the laws of nature, and where that unity comes from. A mere collection of unconnected laws does not necessarily bring about order. Laws have to be integrated in a harmonious fashion so that they establish a foundation of order and allow the evolution of increasingly complex forms, from hydrogen atoms to heavier atomic elements to molecules to DNA to complex life forms to intelligent life forms. The fact of the matter is that it does not take much variation in the values of certain physical constants to cause a collapse of the universe or the development of a universe that is incapable of supporting life. According to physicist Paul Davies:

There are endless ways in which the universe might have been totally chaotic. It might have had no laws at all, or merely an incoherent jumble of laws that caused matter to behave in disorderly or unstable ways. . . . the various forces of nature are not just a haphazard conjunction of disparate influences. They dovetail together in a mutually supportive way which bestows upon nature stability and harmony. . . (The Mind of God: The Scientific Basis for a Rational World, pp. 195-96)

There is a counterargument to this claim of essential unity in the laws of nature: according to theories of the multiverse, new universes are constantly being created with different physical laws and parameters — we just happen to live in a universe that supports life because only a universe that supports life can have observers who speculate about the orderliness of the universe! However, multiverse theories have been widely criticized for being non-falsifiable, since we can’t directly observe other universes.

So, if we are to believe the findings of modern science, the laws of nature have the following characteristics:

  1. They have existed eternally, prior to everything.
  2. They are omnipresent – they exist everywhere.
  3. They are extremely powerful, though they have no energy and no mass.
  4. They are unified and integrated in such a way as to allow the development of complex forms, such as life (at least in this universe, the only universe we can directly observe).

Are these not the characteristics of a universal spirit? Moreover, is not this spirit by definition supernatural, i.e., existing above nature and responsible for the operations of nature?

Please note that I am not arguing here that the laws of nature prove the existence of a personal God who is able to shape, reshape, and interfere with the laws of nature anytime He wishes. I think that modern science has more than adequately demonstrated that the idea of a personal being who listens to our prayers and temporarily suspends or adjusts the laws of nature in response to our prayers or sins is largely incompatible with the evidence we have accumulated over hundreds of years. Earthquakes happen because of shifting tectonic plates, not because certain cities have committed great evils. Disease happens because viruses and bacteria mutate, reproduce, and spread, not because certain people deserve disease. And despite the legend of Moses saving the Jews by parting the Red Sea and then destroying the Pharaoh’s army, God did not send a tsunami to wipe out the Nazis — the armies of the Allied Forces had to do that.

What I am arguing is that if you look closely at what modern science claims about the laws of nature, there is not much that separates these laws from the concept of a universal spirit, even if this spirit is not equivalent to an omnipotent, personal God.

The chief objection to the idea of the laws of nature as a universal spirit is that the laws of nature have the characteristics of mindless regularity and determinism, which are not the characteristics we think of when we think of a spirit. But consider this: the laws of nature do not in fact dictate invariable regularities in all domains; they allow scope for indeterminacy, freedom, and creativity.

Consider activity at the subatomic level. Scientists have studied the behavior of subatomic particles for many decades, and they have discovered laws of behavior for those particles, but the laws are probabilistic, not deterministic. Physicist Richard Feynman, who won a Nobel Prize for his work on the physics of subatomic particles, described the odd world of subatomic behavior as follows: “The electron does whatever it likes.” It travels through space and time in all possible ways, and can even travel backward in time! Feynman was able to offer guidance on how to predict the future location of an electron, but only in terms of a probability based on calculating all the possible paths that the electron could choose.

This freedom on the subatomic level manifests itself in behavior on the atomic level, particularly in the element known as carbon. As Robert Pirsig notes:

One physical characteristic that makes carbon unique is that it is the lightest and most active of the group IV atoms whose chemical bonding characteristics are ambiguous. Usually the positively valenced metals in groups I through III combine chemically with negatively valenced nonmetals in groups V through VII and not with other members of their own group. But the group containing carbon is halfway between the metals and nonmetals, so that sometimes carbon combines with metals and sometimes with nonmetals and sometimes it just sits there and doesn’t combine with anything, and sometimes it combines with itself in long chains and branched trees and rings. . . . this ambiguity of carbon’s bonding preferences was the situation the weak Dynamic subatomic forces needed. Carbon bonding was a balanced mechanism they could take over. It was a vehicle they could steer to all sorts of freedom by selecting first one bonding preference and then another in an almost unlimited variety of ways. . . . Today there are more than two million known compounds of carbon, roughly twenty times as many as all the other known chemical compounds in the world. The chemistry of life is the chemistry of carbon. What distinguishes all the species of plants and animals is, in the final analysis, differences in the way carbon atoms choose to bond. (Lila, p. 168.)

And the life forms constructed by carbon atoms have the most freedom of all — which is why there are few invariable laws in biology that allow predictions as accurate as the predictions of physical systems. A biologist will never be able to predict the motion and destiny of a life form in the same way an astrophysicist can predict the motion of the planets in a solar system.

If you think about the nature of the universal order, regularity and determinism are precisely what is needed on the largest scale (stars, planets, and galaxies), with spontaneity and freedom restricted to the smaller scale of the subatomic/atomic and biological. If stars and planets were as variable and unpredictable as subatomic particles and life forms, there would be no stable solar systems, and no way for life to develop. Regularity and determinism on the large scale provide the stable foundation and firm boundaries needed for freedom, variety, and experimentation on the small scale. In this conception, universal spirit contains the laws of nature, but also has a freedom that goes beyond the laws.

However, it should be noted that there is another view of the laws of nature. In this view, the laws of nature do not have any existence outside of the human mind — they are simply approximate models of the cosmic order that human minds create to understand that order. This view will be discussed in a subsequent post.


Religion as a Source of Evil – Part 2

In a previous post, I critically examined the claim of contemporary atheists that religion, and more broadly a lack of reason, has been a predominant cause of evil in history.  In response, I argued that evil in religion was an expression of deeper causes rooted in human nature, so abolishing religion would not address the fundamental problem of evil.  In addition, I argued that reason itself could not be a solution to evil because reason was too easily used as a tool of self-interest.  However, even after accounting for the deeper causes of evil, there remained a difficult question: what good is religion if it does not actually make human beings better?

This question faced one Christian pastor who was horrified by the easy accommodation of Christian churches in Germany to the Nazi party in the 1930s: Dietrich Bonhoeffer.  Bonhoeffer’s response to the tragic development of Christianity in Germany will be examined briefly here.

Contrary to the claims of many atheists, the Christian churches in Germany were not exactly steadfast allies of the Nazis.  Leading Nazis despised Christianity because of its alleged superstitions and its compassion for the weak, and in the long term Hitler wanted to abolish Christianity.  However, Hitler knew he could not undertake too many battles at once and he did not want to cause division and turmoil in Germany while he needed national unity.  On the other hand, the Christian churches, while opposed to a number of elements of Nazi doctrine, wanted to survive, and largely agreed with Hitler’s policy of restoring German greatness.  So both sides struck a bargain, in which the Nazis permitted the continued existence of the churches as long as they did not challenge the secular authority of Hitler and the Nazis.  Moreover, a “German Christian” movement arose which attempted to reconcile Christianity and Nazism.

A number of leading Christians rebelled at this corrupt bargain, among them Dietrich Bonhoeffer, one of the founders of the anti-Nazi Confessing Church.  Bonhoeffer initially attempted peaceful resistance to the Nazis, later fled to the United States, but then returned to Germany in 1939.  Bonhoeffer made contact with anti-Nazi resisters in German military intelligence, some of whom were involved in various assassination plots against Hitler.  When this underground movement was discovered, Bonhoeffer, already imprisoned by the Nazis, was hanged in April 1945.

In historical retrospect, Bonhoeffer is recognized as being one of the few Christian leaders in Germany who bravely resisted the Nazis and was willing to sacrifice his life for his Christian ideas.  As such, Bonhoeffer is an inspiration to many, but it’s impossible to ignore the other side of the Bonhoeffer phenomenon — the fact that he was decidedly in the minority, that most German Christians went along with the Nazis willingly and even participated in some of the Nazis’ greatest crimes.  This problem plagued Bonhoeffer’s conscience and provoked him to write a number of letters and essays espousing a newly reformed Christianity he called “religionless Christianity.”

Fundamental to Bonhoeffer’s argument was a concept he adopted from Karl Barth, that of “religion as idolatry.”  Idolatry, according to Barth and Bonhoeffer, occurs when human beings reject the “infinite qualitative distinction” between the absolute goodness of God and the flawed nature of man, and instead worship a god that is created in the image of man.  Under idolatry, human beings worship themselves, their nations, their political parties, and their churches, claiming that these human organizations speak for God or are carrying out God’s will, even when the greatest of crimes are being committed.  In his posthumously published Letters and Papers from Prison, Bonhoeffer noted, “. . .my fear and distrust of ‘religiosity’ have become greater than ever here.  The fact that the Israelites never uttered the name of God always makes me think, and I can understand it better as I go on.”

It is important to note that Bonhoeffer’s “religionless Christianity” was not a rejection of faith in God and Christ but a rejection of attempts to claim divine status for ordinary humans and human institutions.  In Bonhoeffer’s view, we don’t need the institutions of religion, which are easily subverted and perverted for evil purposes.  We simply need faith in God, worship, and prayer.  The church itself is secondary and not nearly as important as the individual’s relationship to God.

For Bonhoeffer, “religionless Christianity” was in part an attempt to make the best of a bad situation.  With progress in the sciences and technology making the universe more understandable and life easier to endure, human beings no longer needed God to explain certain mysteries or to cope with suffering.  According to Bonhoeffer, man was “grown up” and could solve many of his problems with technology.  It was no use invoking a “God of the gaps” to account for the remaining problems of humankind, because science could well eventually solve many of those problems as well.

What science and technology could not solve, however, was mankind itself and its tendency to evil, especially when acting in social organizations.  The Nazis excelled with science and technology — they built cutting-edge weapons such as jets and rockets, and their extermination camps were highly efficient in murdering millions at the lowest possible cost.  Man could conquer nature, but how was man to conquer himself?  Christianity in Germany should have been able to address this problem, but the churches only sought self-preservation, and the worship of God was perverted into worship of the German nation and the Fuhrer.  The core meaning of Christianity was lost.  Only the shell of Christianity, in the form of the rituals and the churches, remained.

What was the core meaning of Christianity?  In Bonhoeffer’s view, Christianity was fundamentally about attaining a new life by existing for others and participating in the sufferings of Jesus.  In Bonhoeffer’s words:  “It is not the religious act that makes the Christian, but participation in the sufferings of God in the secular life. . . . The ‘religious act’ is always something partial; ‘faith’ is something whole, involving the whole of one’s life.  Jesus calls men, not to a new religion, but to life.”

Bonhoeffer’s view of the future of the Christian Church was quite radical.  In his notes for a book he was writing while in prison, he wrote:

The church is the church only when it exists for others.  To make a start, it should give away all its property to those in need.  The clergy must live solely on the free-will offerings of their congregations, or possibly engage in some secular calling.  The church must share in the secular problems of ordinary human life, not dominating, but helping and serving.  It must tell men of every calling what it means to live in Christ, to exist for others.  In particular, our own church will have to take the field against the vices of hubris, power-worship, envy, and humbug, as the roots of all evil.  It will have to speak of moderation, purity, trust, loyalty, constancy, patience, discipline, humility, contentment, and modesty.  It must not under-estimate the importance of human example (which has its origin in the humanity of Jesus and is so important in Paul’s teaching); it is not abstract argument, but example, that gives its word emphasis and power.

Bonhoeffer’s views would probably appeal today to people who reject the label “Christian” and instead call themselves “followers of Jesus.”  These people are unhappy with the narrow-mindedness of many Christian churches and their involvement in politics; many of these “followers of Jesus” do not even go to church.  But they are drawn to Jesus’s teachings and the example of his love and self-sacrifice.

As for myself, I find a lot of merit to Bonhoeffer’s view of “religionless Christianity.”  But I also see several obstacles to its widespread adoption.  For one, Bonhoeffer’s vision does not appeal to those outside the Christian faith.  Bonhoeffer was fairly insistent that the Christian faith was not just another religion, but in fact a replacement for all religions.  God revealed himself in Christ, and that was that.  Second, the question of what God requires of us when we face particular political and social controversies is not going to be clear all the time, or even most of the time.  People of legitimate and honest Christian conscience may find themselves on opposite sides when faced with questions of war, the duties of the citizen to their government, the proper economic policy, the justice of the laws, etc.  At best, Christ provides general guidance, not specific guidance, and even good Christians may find themselves on different sides of an issue because of different views on the specifics of policy.   Finally, the notion of living for others and suffering with Christ is a noble goal, but extremely difficult, if not impossible, for most people.  We rightly honor Bonhoeffer for following Christ in martyrdom, but how many of us are really willing to become martyrs?  Few, I bet.  Still, even if we only emulate Christ partially and imperfectly, I suppose that is better than nothing, and considerably better than emulating the wrong person.

 

Religion as a Source of Evil

That religious individuals and institutions have committed great evils in the past is a fact not disputed by most intelligent persons with a good understanding of history.  What is disputed is the question of how much evil in history religion has actually been responsible for, and how to weigh that evil against the good that religion has done.

A number of contemporary atheist authors such as Sam Harris and Christopher Hitchens focus intensely, even obsessively, on the evils committed by religion.  The message of their books is that not only is religion mostly evil, but that most of the evils committed by human beings historically can be attributed to religion and, more broadly, to a deficiency of reason.  They point to the role of religion in slavery, massacre, torture, ethnic conflict and genocide, racism, and antisemitism.  In response to the argument that secular regimes under the Nazis and Communists have also been responsible for these same evils, Harris and Hitchens point to the willing collaboration of religious authorities and institutions with the Nazis.  Both authors also argue that secular dictatorships suffered from a deficiency of reason similar to that of religious faith.  A greater commitment to reason and to evidence as the basis for belief, in their view, would do much to end evils committed by both religious and secular movements and regimes.

There is a good deal of truth to these arguments.  The world would be much improved if superstitions and incorrect beliefs about other human beings, ethnic groups, and societies could be eliminated.  But ultimately Harris and Hitchens do not seem to understand, or even take interest in, the deeper causes of evil in human beings.

The problem with viewing evil as being simply an outcome of irrationality is that it overlooks the powerful tendency of reason itself to be a tool of self-interest and self-aggrandizement.  Human beings commit evil not so much because they are irrational, but because they use reason to pursue and justify their desires.  It is the inherent self-centeredness of human beings that is the source of evil, not the belief systems that enable the pursuit and justification of self-interest.  Individual and group desires for wealth, power, influence, fame, prestige, and the fear of defeat and shame — these are the causes of social conflict, violence, and oppression.

Harris and Hitchens point to Biblical sanctions for slavery, and imply that slavery would not have existed if it were not for religion.  But is it not the case that slavery was ultimately rooted in the human desire for a life of wealth and ease, and that one path to such a life in the pre-industrial era was to force others to work for one’s self?  Is it not also the case that human conflicts over land were (and are) rooted in the same desire for wealth, and that violent conflicts over social organization have been rooted in clashing visions over who is to hold power?  Religion has been implicated in slavery and social conflicts, but religion has not been the main cause.

It is worth quoting James Madison on the perennial problem of oppression and violence:

 As long as the reason of man continues [to be] fallible, and he is at liberty to exercise it, different opinions will be formed. As long as the connection subsists between his reason and his self-love, his opinions and his passions will have a reciprocal influence on each other; and the former will be objects to which the latter will attach themselves. . . .

The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. A zeal for different opinions concerning religion, concerning Government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for preëminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other, than to coöperate for their common good. So strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions, and excite their most violent conflicts.  (Federalist, No. 10)

In Madison’s view, religion was but one source of conflict and oppression, which was ultimately rooted in the problem of differing opinions among humans, arising out of human beings’ inevitable fallibility and self-love.

A number of contemporary studies have demonstrated the truth of Madison’s insight.  On contentious issues ranging from global warming to gun control, people of greater intelligence tend to be more passionately divided than people of lesser intelligence, and more likely to interpret evidence in a way that supports the conclusions of the groups to which they belong.  Higher intelligence did not lead to more accurate conclusions but to a greater ability to interpret evidence in a way that supported pre-existing beliefs and group preferences.

Sam Harris himself displays tendencies toward extreme intolerance in his book that would make one leery of the simple claim that an enthusiastic commitment to reason would do much to end violence and oppression.  In his book The End of Faith, Harris declares that “[s]ome propositions are so dangerous that it may even be ethical to kill people for believing them” (pp. 52-53); he calls for imposing a “benign dictatorship” on backward societies as a means of self-defense (pp. 150-51); and he defends the use of torture on both military prisoners and criminal suspects in certain cases  (p. 197).  Harris even writes dreamily of what might have been if “some great kingdom of Reason emerged at the time of the Crusades and pacified the credulous multitudes of Europe and the Middle East.  We might have had modern democracy and the Internet by the year 1600.” (p. 109)  One can just imagine Sam Harris as the leader of this great “kingdom of Reason,” slaughtering, oppressing, and torturing ignorant, superstitious masses, all for the sake of lasting peace and progress.  Of course, there is nothing new about this dream.  It was the dream of the Jacobins and of the Communists as well, which was a nightmare for all who opposed them.

Edmund Burke, a keen observer of the Jacobin mentality, correctly noted why it was mistaken to believe that abolishing religion would do much to eliminate evil in the world:

 History consists, for the greater part, of the miseries brought upon the world by pride, ambition, avarice, revenge, lust, sedition, hypocrisy, ungoverned zeal, and all the train of disorderly appetites, which shake the public with the same

‘troublous storms that toss
The private state, and render life unsweet.’

These vices are the causes of those storms. Religion, morals, laws, prerogatives, privileges, liberties, rights of men, are the pretexts. The pretexts are always found in some specious appearance of a real good. You would not secure men from tyranny and sedition by rooting out of the mind the principles to which these fraudulent pretexts apply? If you did, you would root out everything that is valuable in the human breast. As these are the pretexts, so the ordinary actors and instruments in great public evils are kings, priests, magistrates, senates, parliaments, national assemblies, judges, and captains. You would not cure the evil by resolving that there should be no more monarchs, nor ministers of state, nor of the Gospel,—no interpreters of law, no general officers, no public councils. You might change the names: the things in some shape must remain. A certain quantum of power must always exist in the community, in some hands, and under some appellation. Wise men will apply their remedies to vices, not to names,—to the causes of evil, which are permanent, not to the occasional organs by which they act, and the transitory modes in which they appear. Otherwise you will be wise historically, a fool in practice. (Reflections on the Revolution in France)

Nevertheless, even if we accept Burke’s contention that the evils of religion lie in human nature and not in religion itself, there remains one question:  shouldn’t we expect more of religion?  If religion doesn’t make people better, and simply reflects human nature, then what good is it?

To that question, I have to say that I honestly do not know.  History offers such a superabundance of both the good and ill effects of religious beliefs and institutions that I cannot fairly weigh the evidence.  In addition, widespread atheism is still a relatively new phenomenon in history, so I find it difficult to judge the long-term effects of atheism.  It is true that atheist regimes have committed many atrocities, but it is also the case that atheism is widespread in modern European democracies, and those countries are free of massacre and oppression, and have lower crime rates than the more religious United States.

Perhaps we should consider the views of one Christian who personally witnessed the catastrophic capitulation of Christian churches to the Nazi regime in the 1930s and decided to become a dissenter to the Nazi regime and the German Christian establishment that supported the Nazis.  Dietrich Bonhoeffer, who was executed by the Nazis in the waning days of World War Two, proposed a newly reformed Christianity that would indeed fulfill the role of making human beings better.  I will critically evaluate Bonhoeffer’s proposal in a future post.

Belief and Evidence

A common argument by atheists is that belief without evidence is irrational and unjustified, and that those arguing for the existence of God have the burden of proof.  Bertrand Russell famously argued that if one claims that there is a teapot orbiting the sun, the burden of proving the existence of the teapot is on the person who asserts the existence of the teapot, not the denier.  Christopher Hitchens has similarly argued that “What can be asserted without evidence can also be dismissed without evidence.”  Hitchens has advanced this principle even further, arguing that “exceptional claims demand exceptional evidence.”  (god is not Great, pp. 143, 150)  Sam Harris has argued that nearly every evil in human history “can be attributed to an insufficient taste for evidence” and that “We must find our way to a time when faith, without evidence, disgraces anyone who would claim it.”   (The End of Faith, pp. 25, 48)

A demand for evidence is surely a legitimate requirement for most ordinary claims.  But it would be a mistake to turn this rule into a rigid and universal requirement, because many of the issues and problems we encounter in our lives are not always rich with evidence.  Some issues have a wealth of evidence, some issues have a small amount of indirect or circumstantial evidence, some issues have evidence compatible with a variety of radically different conclusions, and some issues have virtually no evidence.  What’s worse is that there appears to be an inverse relationship between the size and importance of the issue one is addressing and the amount of evidence that is available.  The bigger the question one has, the less evidence there is to address it.  The questions of how to obtain a secure and steady supply of food, water, and shelter, how to extend the human lifespan and increase the economic standard of living, all have scientific-technological answers backed by abundant evidence.  Other issues, such as the origins of the universe, the nature of the elementary particles, and the evolution of life, also have large amounts of evidence, albeit with significant gaps in certain details.  But some of the most important questions we face have such a scarcity of evidence that a variety of conflicting beliefs seems inevitable.  Why does the universe exist?  Is there intelligent life on other planets, and if so, how many planets have such life?  Where did the physical laws of the universe come from?  What should we do with our lives?  Will the human race survive the next 1000 years?  Are our efforts to be good people and follow moral codes all in vain?

In cases of scarce evidence, to demand that sufficient evidence exist before forming a belief is to put the cart before the horse.  If one looks at the origins and growth of knowledge in human civilization, belief begins with imagination — only later are beliefs tested and challenged.  Without imagination, there are no hypotheses to test.  In fact, one would not know what evidence to gather if one did not begin with a belief.  Knowledge would never advance.  As the philosopher George Santayana argued in his book Reason and Religion,

A good mythology cannot be produced without much culture and intelligence. Stupidity is not poetical. . . . The Hebrews, denying themselves a rich mythology, remained without science and plastic art; the Chinese, who seem to have attained legality and domestic arts and a tutored sentiment without passing through such imaginative tempests as have harassed us, remain at the same time without a serious science or philosophy. The Greeks, on the contrary, precisely the people with the richest and most irresponsible myths, first conceived the cosmos scientifically, and first wrote rational history and philosophy. So true it is that vitality in any mental function is favourable to vitality in the whole mind. Illusions incident to mythology are not dangerous in the end, because illusion finds in experience a natural though painful cure. . . .  A developed mythology shows that man has taken a deep and active interest both in the world and in himself, and has tried to link the two, and interpret the one by the other. Myth is therefore a natural prologue to philosophy, since the love of ideas is the root of both.

Modern critics of traditional religion are right to argue that we need to revise, reinterpret, or abandon myths when they conflict with new evidence.  As astronomy advanced, it was necessary to abandon the geocentric model of the universe.   As the evidence for evolution accumulated, it was no longer plausible to believe that the universe was created in the extremely short span of six days.  There is a difference between a belief formed in the face of a scarcity of evidence and a belief that goes against an abundance of evidence.  The former is permitted, and is even necessary to advance knowledge; the latter takes knowledge backward.

Today we have reached the point at which science is attempting to answer some very large questions, and science is running up against the limits of what is possible with observation, experimentation, and verification.  Increasingly, the scientific imagination is developing theories that are plausible, but have little or no evidence to back them up; in fact, for many of these theories we will probably never have sufficient evidence.  I am referring here to cosmological theories about the origins of the universe that propose a “multiverse,” that is, a large or even infinite collection of universes that exist alongside our own observable universe.

There are several different types of multiverse theories.  The first type, which many if not most cosmologists accept, proposes multiple universes with the same physical laws and constants as ours, but with different distributions of matter.  A second type, which is more controversial, proposes an infinite number of universes with different physical laws and constants.  A third type, also controversial, arises out of the “many worlds” interpretation of quantum physics — in this view, every time an indeterminate event occurs (say, a six-sided die comes up a “four”), an entirely new universe splits off from our own.  Thus, the most extreme multiverse theories claim that all possibilities exist in some universe, somewhere.  There are even an infinite number of people like you, each with a slight variation in life history (i.e., turning left instead of turning right when leaving the house this morning).

The problem with these theories, however, is that it is impossible to obtain solid evidence of the existence of other universes through observation — the universes either exist far beyond the limits of our observable universe, or they reside on a different branch of reality that we cannot reach.  Now it’s not unusual for a scientific theory to predict the existence of particles or forces or worlds that we cannot yet observe; historically, a number of such predictions have proved true when the particle or force or world was finally observed.  But many other predictions have not been proved true.  With the multiverse, it is unlikely that we will have definitive evidence one way or the other.  And a number of scientists have revolted at this development, arguing that cosmology at this level is no longer scientific.  According to physicist Paul Davies,

Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence it requires the same leap of faith.

Likewise, Freeman Dyson insists:

[T]he multiverse is philosophy and not science. Science is about facts that can be tested and mysteries that can be explored, and I see no way of testing hypotheses of the multiverse. Philosophy is about ideas that can be imagined and stories that can be told. I put narrow limits on science, but I recognize other sources of human wisdom going beyond science. Other sources of wisdom are literature, art, history, religion, and philosophy. The multiverse has its place in philosophy and in literature.

Cosmologist George F.R. Ellis, in the August 2011 issue of Scientific American, notes that there are several ways of indirectly testing for the existence of multiple universes, but none are likely to be definitive.  He concludes: “Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are.  But we should name it for what it is.”

Given the thinness of the evidence for extreme multiverse theories, one might ask why modern-day atheists do not seem to attack and mock such theorists for believing in something for which they cannot provide solid evidence.  At the very least, Christopher Hitchens’s claim that “exceptional claims demand exceptional evidence” would seem to invalidate belief in any multiverse theory.  At best, at some future point we may have indirect or circumstantial evidence for the existence of some other universes; but we are never going to have exceptional evidence for an infinite number of universes consisting of all possibilities.  So why do we not hear of insulting analogies involving orbiting teapots and flying spaghetti monsters when some scientists propose an infinite number of universes based on different physical laws or an infinite number of versions of you?  I think it’s because scientists are respected authority figures in a modern, secular society.  If a scientist says there are multiple universes, we are inclined to believe them even in the absence of solid evidence, because scientists have social prestige, especially among atheists.

Ultimately, there is no solid evidence for the existence of God, no solid evidence for the existence of an infinite variety of universes, and no solid evidence for the existence of other versions of me.  Whether or not one chooses to believe any of these propositions depends on whether one decides to leap into the dark, and which direction one decides to leap.  This does not mean that any religious belief is permissible — on issues which have abundant evidence, beliefs cannot go against evidence.  Evolution has abundant evidence, as does modern medical science, chemistry, and rocket science.  But where evidence is scarce, and a variety of beliefs are compatible with existing evidence, holding a particular belief cannot be regarded as wholly unjustified and irrational.