The Mythos of Mathematics

‘Modern man has his ghosts and spirits too, you know.’

‘What?’

‘Oh, the laws of physics and of logic . . . the number system . . . the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.’

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

 

It is a popular position among physicists that mathematics is what ultimately lies behind the universe. When asked for an explanation for the universe, they point to numbers and equations, and furthermore claim that these numbers and equations are the ultimate reality, existing objectively outside the human mind. This view is known as mathematical Platonism, after the Greek philosopher Plato, who argued that the ultimate reality consisted of perfect forms.

The problem we run into with mathematical Platonism is that it is subject to some of the same skepticism that people have about the existence of God, or the gods. How do we know that mathematics exists objectively? We can’t sense mathematics directly; we only know that it is a useful tool for dealing with reality. The fact that math is useful does not prove that it exists independently of human minds. (For an example of this skepticism, see this short video).

Scholars George Lakoff and Rafael Nunez, in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, offer the provocative and fascinating thesis that mathematics consists of metaphors. That is, the abstractions of mathematics are ultimately grounded in conceptual comparisons to concrete human experiences. In the view of Lakoff and Nunez, all human ideas are shaped by our bodily experiences, our senses, and how these senses react to our environment. We try to make sense of events and things by comparing them to our concrete experiences. For example, we conceptualize time as a limited resource (“time is money”); we conceptualize status or mood in terms of space (happy is “up,” while sad is “down”); we personify events and things (“inflation is eating up profits,” “we must declare war on poverty”). Metaphors are so prevalent and so taken for granted that most of the time we don’t even notice them.

Mathematical systems, according to Lakoff and Nunez, are also metaphorical creations of the human mind. Since human beings have common experiences with space, time, and quantities, our mathematical systems are similar. But we do have a choice in the metaphors we use, and that is where the creative aspect of mathematics comes in. In other words, mathematics is grounded in common experiences, but mathematical conceptual systems are creations of the imagination. According to Lakoff and Nunez, confusion and paradoxes arise when we take mathematics literally and don’t recognize the metaphors behind mathematics.

Lakoff and Nunez point to a number of common human activities that subsequently led to the creation of mathematical abstractions. The collection of objects led to the creation of the “counting numbers,” otherwise known as “natural numbers.” The use of containers led to the notion of sets and set theory. The use of measuring tools (such as the ruler or yard stick) led to the creation of the “number line.” The number line in turn was extended to a plane with x and y coordinates (the “Cartesian plane“). Finally, in order to understand motion, mathematicians conceptualized time as space, plotting points in time as if they were points in space — time is not literally the same as space, but it is easier for human beings to measure time if it is plotted on a spatial graph.

Throughout history, while the counting numbers have been widely accepted, there have been controversies over the creation of other types of numbers. One of the reasons for these controversies is the mistaken belief that numbers must be objectively real rather than metaphorical. So the number zero was initially controversial because it made no sense to speak literally of having a collection of zero objects. Negative numbers were even more controversial because it’s impossible to literally have a negative number of objects. But as the usefulness of zero and negative numbers as metaphorical expressions and in performing calculations became clear, these numbers became accepted as “real” numbers.

The metaphor of the measuring stick/number line, according to Lakoff and Nunez, has been responsible for even more controversy and confusion. The basic problem is that a line is a continuous object, not a collection of objects. If one makes an imaginative metaphorical leap and envisions the line as a collection of objects known as segments or points, that is very useful for measuring the line, but a line is not literally a collection of segments or points that correspond to objectively existing numbers.

If you draw three points on a piece of paper, the collection of points clearly corresponds to the number three, and only the number three. But if you draw a line on a piece of paper, how many numbers does it have? Where do those numbers go? The answer is up to you, depending on what you hope to measure and how much precision you want. The only requirement is that the numbers are in order and the length of the segments is consistently defined. You can put zero on the left side of the line, the right side of the line, or in the middle. You can use negative numbers or not use negative numbers. The length of the segments can be whatever you want, as long as the definitions of segment length are consistent.

The number line is a great mental tool, but it does not objectively exist outside of the human mind. Neglecting this fact has led to paradoxes that confounded the ancient Greeks and continue to mystify human beings to this day. The first major problem arose when the Greeks attempted to determine the ratio between the sides of certain simple figures and discovered that the ratio could not be expressed as a ratio of whole numbers, but only as an infinite, nonrepeating decimal. For example, a right triangle with two shorter sides of length 1 would, according to the Pythagorean theorem, have a hypotenuse whose length is the square root of 2, which is an infinite decimal: 1.41421356. . .  This scandalized the ancient Greeks at first, because many of them had a religious devotion to the idea that whole numbers existed objectively and were the ultimate basis of reality. Nevertheless, over time the Greeks eventually accepted the so-called “irrational numbers.”
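To see why no such ratio can exist, here is the standard modern proof sketch (not how the Greeks themselves would have phrased it, but the same underlying argument):

```latex
% Suppose, for contradiction, that the square root of 2 were a ratio p/q of
% whole numbers, already reduced to lowest terms.
\[
\sqrt{2} = \frac{p}{q}
\;\Longrightarrow\;
p^2 = 2q^2
\;\Longrightarrow\;
p \text{ is even, say } p = 2k
\;\Longrightarrow\;
4k^2 = 2q^2
\;\Longrightarrow\;
q^2 = 2k^2
\;\Longrightarrow\;
q \text{ is even.}
\]
% But p and q cannot both be even if the fraction was in lowest terms, so no
% ratio of whole numbers equals the square root of 2.
```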

Perhaps the most famous irrational number is pi, the measure of the ratio between the circumference of a circle and its diameter: 3.14159265. . . The fact that pi is an infinite decimal fascinates people to no end, and scientists have calculated the value of pi to over 13 trillion digits. But the digital representation of pi has no objective existence — it is simply a creation of the human imagination based on the metaphor of the measuring stick / number line. There’s no reason to be surprised or amazed that the ratio of the circumference of a circle to its diameter is an infinite decimal; lines are continuous objects, and expressing lines as being composed of discrete objects known as segments is bound to lead to difficulties eventually. Moreover, pi is not necessary for the existence of circles. Even children are perfectly capable of drawing circles without knowing the value of pi. If children can draw circles without knowing the value of pi, why should the universe need to know the value of pi? Pi is simply a mental tool that human beings created to understand the ratio of certain line lengths by imposing a conceptual framework of discrete segments on a continuous quantity. Benjamin Lee Buckley, in his book The Continuity Debate, underscores this point, noting that one can use discrete tools for measuring continuity, but that truly continuous quantities are not really composed of discrete objects.

It is true that mathematicians have designated pi and other irrational numbers as “real” numbers, but the reality of the existence of pi outside the human mind is doubtful. An infinitely precise pi implies infinitely precise measurement, but there are limits to how precise one can be in reality, even assuming absolutely perfect measuring instruments. Although pi has been calculated to over 13 trillion digits, it is estimated that only 39 digits are needed to calculate the volume of the known universe to the precision of one atom! Furthermore, the Planck length is the smallest measurable length in the universe. Although quite small, the Planck length sets a definite limit on how precise pi can be in reality. At some point, depending on the size of the circle one creates, the extra digits in pi are simply meaningless.
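As a rough back-of-the-envelope check on that claim, here is the circumference version of the estimate, using assumed round numbers (a universe about 8.8 × 10^26 meters across and a hydrogen atom about 10^-10 meters across):

```latex
% The circumference of a universe-sized circle is C = pi * d, so an error of
% delta in the value of pi shifts C by delta * d. To keep that shift smaller
% than a single atom:
\[
\delta \cdot d < 10^{-10}\ \text{m}
\;\Longrightarrow\;
\delta < \frac{10^{-10}\ \text{m}}{8.8 \times 10^{26}\ \text{m}} \approx 10^{-37} ,
\]
% so roughly 37 to 39 digits of pi already pin such a circle down to within a
% single atom; every digit beyond that is physically meaningless.
```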

Undoubtedly, the number line is an excellent mental tool. If we had perfect vision, perfect memory, and perfect eye-hand coordination, we wouldn’t need to divide lines into segments and count how many segments there are. But our vision is imperfect, our memories are fallible, and our eye-hand coordination is limited. That is why we need to use versions of the number line to measure things. But we need to recognize that we are creating and imposing a conceptual tool on reality. This tool is metaphorical and, while originating in human experience, it is not reality itself.

Lakoff and Nunez point to other examples of metaphorical expressions in mathematics, such as the concept of infinity. Mathematicians discuss the infinitely large, the infinitely small, and functions in calculus that come infinitely close to some designated limit. But Lakoff and Nunez point out that the notion of actual (literal) infinity, as opposed to potential infinity, has been extremely problematic, because calculating or counting infinity is inherently an endless process. Lakoff and Nunez argue that envisioning infinity as a thing, or the result of a completed process, is inherently metaphorical, not literal. If you’ve ever heard children use the phrase “infinity plus one!” in their taunts, you can see some of the difficulties with envisioning infinity as a thing, because one can simply take the allegedly completed process and start it again. Oddly, even professional mathematicians don’t agree on the question of whether “infinity plus one” is a meaningful statement. Traditional mathematics says that infinity plus one is still infinity, but there are more recent number systems in which infinity plus one is meaningful. (For a discussion of how different systems of mathematics arrive at different answers to the same question, see this post.)
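One concrete example comes from Cantor’s ordinal arithmetic, which treats infinity as a completed object; there, “infinity plus one” is meaningful, and the order of addition even matters:

```latex
% omega denotes the first infinite ordinal.
\[
\omega + 1 \;\neq\; \omega
\qquad\text{yet}\qquad
1 + \omega \;=\; \omega ,
\]
% so "infinity plus one" names a genuinely larger ordinal, while "one plus infinity"
% collapses back to omega. Cardinal arithmetic, by contrast, gives
% aleph_0 + 1 = aleph_0 either way, which is the traditional answer that infinity
% plus one is still infinity.
```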

Nevertheless, many mathematicians and physicists fervently reject the idea that mathematics comes from the human mind. If mathematics is useful for explaining and predicting real world events, they argue, then mathematics must exist in objective reality, independent of human minds. But why is it important for mathematics to exist objectively? Isn’t it enough that mathematics is a useful mental tool for describing reality? Besides, if all the mathematicians in the world stopped all their current work and devoted themselves entirely to proving the objective existence of mathematical objects, I doubt that they would succeed, and mathematical knowledge would simply stop progressing.

Scientific Evidence for the Reality of Mysticism

What exactly is mysticism? One of the problems with defining and evaluating mysticism is that mystical experiences seem to be inherently personal and unobservable to outsiders. Certainly, one can observe a person meditate and then record what that person says about his or her experience while meditating, but what is real about that experience? Persons undergoing such experiences often describe a feeling of oneness with the universe, love for all creation, and communion with the divine. But anyone can dream or daydream or imagine. It would be one thing if mystics came up with great ideas during their meditative states — a cure for a disease, a viable plan for peace between warring states, or a better political system. But this generally does not happen. Mystical experiences remain personal.

Recently, however, brain scan technologies, along with knowledge of the different functional areas of the human brain, have allowed scientists for the first time to actually observe what is going on in the brains of people who are undergoing mystical experiences. And the findings are remarkable.

Studies of persons engaged in prayer or meditation indicate that the frontal lobes of participants’ brains, responsible for concentration, light up during meditation — an unsurprising finding. However, at the same time, the parietal lobes of the same brains go dark — these sections of the brain are responsible for an individual’s sense of self, and help a person orient him or herself to the world. So when a person claims that they experience a oneness with the universe, that appears to be exactly what is going on — the person’s sense of self is actually put to sleep. And when the sense of self disappears, so does egocentrism. Researchers have found that people who regularly engage in meditation literally reshape their brains, becoming both more attentive and compassionate. The particular religion they belonged to did not matter — Buddhist, Christian, Sikh — all seemed to experience the same changes in the brain.

Research on psychedelic drugs has found evidence that psychedelics act as potent enablers of mystical experience. Psilocybin, the active chemical in “magic mushrooms,” and mescaline, from the cactus known as peyote, are psychedelics that have been used for thousands of years in Native American religious ceremonies. LSD was synthesized in 1938 from a fungus, ergot, that may have played a role as a hallucinogen in ancient Greek religions. What these chemicals have in common is that they all seem to have effects on the brain similar to what may be experienced during deep meditation: they dissolve the sense of self and enable extraordinary visions that appear to give people a radically new perspective on their lives. One research subject described her experience on psilocybin as follows: “I know that I had a merging with what I call oneness. . . . There was a time that I was being gently pulled into it, and I saw it as light. . . . It isn’t even describable. It’s not just light; it’s love.” In fact, two-thirds of study participants who received psilocybin ranked their psychedelic experience as being among the top five most spiritually significant experiences of their lives, comparable to the birth of a child or the death of a parent.

Psilocybin has even been used to treat addiction — the mystical experience seems to reboot the brain, allowing people to break old, ingrained habits. A study of fifteen smokers who had failed multiple treatments for addiction found that after therapy sessions with psilocybin, 80 percent were able to quit cigarettes for at least 6 months, an unprecedented success rate. Smokers who seemed to have a more complete mystical experience had the greatest success quitting. According to one subject, “smoking seemed irrelevant, so I stopped.” Cancer patients who underwent treatment with psilocybin had a reduction in anxiety and distress.

Brain scans of those undergoing mystical experiences under psychedelics indicate reduced activity in the part of the brain known as the “default-mode network.” This high-level part of the brain acts as something of a corporate executive for the brain. It inhibits lower-level brain functions such as emotion and memory and also creates a sense of self, enabling persons to distinguish themselves from other people and from the rest of the world. When psychedelics suppress the default-mode network, the lower brain regions are unleashed, leading to visions that may be bizarre but, in many cases, insightful. And the sense of self disappears, as one feels a merging with the rest of the world.

It’s important not to overstate the findings of these scientific studies by citing spiritual experiences as justification for theism. These studies do not prove that God exists or that there is a supernatural dimension. The visions that people experience while under psychedelics are often chaotic and meaningless. But this sort of radical free association does seem to help people attain new perspectives and enhance their openness to new ideas. And the feeling of oneness with the universe, the dissolution of the self, is not just an unconfirmed claim — it really does seem to be supported by brain scan studies. So the mystical experience is not superstitious pre-scientific thinking, but a valid mode of thought, one which many of us, myself included, have dismissed without even trying it.

Scientific Evidence for the Benefits of Faith

Increasingly, scientific studies have recognized the power of positive expectations in the treatment of people who are suffering from various illnesses. The so-called “placebo” effect is so powerful that studies generally try to control for it: fake pills, fake injections, or sometimes even fake surgeries will be given to one group while another group is offered the “real” treatment. If the real drug or surgery is no better than the fake drug/surgery, then the treatment is considered a failure. What has not been recognized until relatively recently is that the power of positive expectations can be considered a form of treatment in itself.

Recently, Harvard University has established a Program in Placebo Studies and the Therapeutic Encounter in order to study this very issue. For many scientists, the power of the placebo has been a scandal and an embarrassment, and the idea of offering a “fake” treatment to a patient seems to go against every ethical and professional principle. But the attitude of Ted Kaptchuk, head of the Harvard program, is that if something works, it’s worth studying, no matter how crazy and irrational it seems.

In fact, “crazy” and “irrational” seem to be apt words to describe the results of research on placebos. Researchers have found differences in the effectiveness of placebos based merely on appearance — large pills are more effective than small pills; two pills are better than one pill; “brand name” pills are more effective than generics; capsules are better than pills; and injections are the most effective of all! Even the color of pills affects the outcome. One study found that the most famous anti-anxiety medication in the world, Valium, has no measurable effect on a person’s anxiety unless the person knows he or she is taking it (see “The Power of Nothing” in the Dec. 12 2011 New Yorker). The placebo is probably the oldest and simplest form of “faith healing” there is.

There are scientists who are critical of many of these placebo studies; they believe the power of placebos has been greatly exaggerated. Several studies have concluded that the placebo effect is small or insignificant, especially when objective measures of patient improvement are used instead of subjective self-reports.

However, it should be noted that the placebo effect is not simply a matter of patient feelings that are impossible to measure accurately — there is actually scientific evidence that the human brain manufactures chemicals in response to positive expectations. In the 1970s, it was discovered that people who reported a reduction in pain in response to a placebo were actually producing greater amounts of endorphins, substances in the brain that act much like morphine and heroin, reducing pain and producing feelings of euphoria (as in the “runner’s high“). Increasingly, studies of the placebo effect have relied on brain scans to actually track changes in the brain in response to a patient receiving a placebo, so measurement of effects is not merely a matter of relying on what a person says. One recent study found that patients suffering from Parkinson’s disease responded better to an “expensive” placebo than a “cheaper” placebo. Patients were given injections containing nothing but saline water, but the group of patients told that the saline solution cost $1,500 per dose experienced significantly greater improvement in motor function than the group given a “cheaper” placebo! This happens because the placebo effect boosts the brain’s production of dopamine, which counteracts the effects of Parkinson’s disease. Brain scans have confirmed greater dopamine activation in the brains of those given placebos.

Other studies have confirmed the close relation between the health of the human mind and the health of the body. Excessive stress weakens the immune system, creating an opening for illness. People who regularly practice meditation, on the other hand, can strengthen their immune systems and, as a result, catch colds and the flu less often. The health effects of meditation do not depend on the religion of those practicing it — Buddhist, Christian, or Sikh. The mere act of meditation is what is important.

Why has modern medicine been so slow and reluctant to acknowledge the power of positive expectations and spirituality in improving human health? I think it’s because modern science has been based on certain metaphysical assumptions about nature which have been very valuable in advancing knowledge historically, but are ultimately limited and flawed. These assumptions are: (1) Anything that exists solely in the human mind is not real; (2) Knowledge must be based on what exists objectively, that is, what exists outside the mind; and (3) Everything in nature is based on material causation — impersonal objects colliding with or forming bonds with other impersonal objects. In many respects, these metaphysical assumptions were valuable in overcoming centuries of wrong beliefs and superstitions. Scientists learned to observe nature in a disinterested fashion, to discover how nature actually was and not how we wanted it to be. Old myths about gods and personal spirits shaping nature became obsolete, to be replaced by theories of material causation, which led to technological advances that brought the human race enormous benefits.

The problem with these metaphysical assumptions, however, is that they draw too sharp a separation between the human mind and what exists outside the mind. The human mind is part of reality, embedded in reality. Scientists rely on concepts created by the human mind to understand reality, and multiple, contradictory concepts and theories may be needed to understand reality.  (See here and here). And the human mind can modify reality – it is not just a passive spectator. The mind affects the body directly because it is directly connected to the body. But the mind can also affect reality by directing the limbs to perform certain tasks — construct a house, create a computer, or build a spaceship.

So if the human mind can shape the reality of the body through positive expectations, can positive expectations bring additional benefits, beyond health? According to the American philosopher William James in his essay “The Will to Believe,” a leap of faith could be justified in certain restricted circumstances: when a momentous decision must be made, there is a large element of uncertainty, and there are not enough resources and time to reduce the uncertainty. (See this post.) In James’ view, in some cases, we must take the risk of supposing something is true, lest we lose the opportunity of gaining something beneficial. In short, “Faith in a fact can help create that fact.”

Scientific research on how expectations affect human performance tends to support James’ claim. Performance in sports is often influenced by athletes’ expectations of “good luck.” People who are optimistic and visualize their ideal goals are more likely to actually attain their goals than people who don’t. One recent study found that human performance in a color discrimination task is better when the subjects are provided a lamp that has a label touting environmental friendliness. Telling people about stereotypes before crucial tests affects how well they perform — Asians who are told about how good Asians are at math perform better on math tests; women who are sent the message that women are not as smart perform less well. When golfers are told that winning golf is a matter of intelligence, white golfers improve their performance; when golfers are told that golf is a matter of natural athleticism, black golfers do better.

Now, I am not about to tell you that faith is good in all circumstances and that you should always have faith. Applied across the board, faith can hurt you or even kill you. Relying solely on faith is not likely to cure cancer or other serious illnesses. Worshipers in some Pentecostal churches who handle poisonous snakes sometimes die from snake bites. And terrorists who think they will be rewarded in the afterlife for killing innocent people are truly deluded.

So what is the proper scope for faith? When should it be used and when should it not be used? Here are three rules:

First, faith must be restricted to the zone of uncertainty that always exists when evaluating facts. One can have faith in things that are unknown or not fully known, but one should not have faith in things that are contrary to facts that have been well-established by empirical research. One cannot simply say that one’s faith forbids belief in the scientific findings on evolution and the big bang, or that faith requires that one’s holy text is infallible in all matters of history, morals, and science.

Second, the benefits of faith cannot be used as evidence for belief in certain facts. A person who finds relief from Parkinson’s disease by imagining the healing powers of Christ’s love cannot argue that this proves that Jesus was truly the son of God, that Jesus could perform miracles, was crucified, and rose from the dead. These are factual claims that may or may not be historically accurate. Likewise with the golden plates of Joseph Smith that were allegedly the basis for the Book of Mormon or the ascent of the prophet Muhammad to heaven — faith does not prove any of these alleged facts. If there was evidence that one particular religious belief tended to heal people much better than other religious beliefs, then one might devote effort to examining if the facts of that religion were true. But there does not seem to be a difference among faiths — just about any faith, even the simplest faith in a mere sugar pill, seems to work.

Finally, faith should not run unnecessary risks. Faith is a supplement to reason, research, and science, not an alternative. Science, including medical science, works. If you get sick, you should go to a doctor first, then rely on faith. As the prophet Muhammad said, “Tie your camel first, then put your trust in Allah.”

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth was, in Kuhn’s view, highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles and predicts more phenomena to a greater degree of accuracy, the theory must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)
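To make the simplicity tradeoff concrete, it is worth setting the two side by side (a standard textbook contrast, not a derivation):

```latex
% Newton: a single algebraic law for the gravitational force between two masses.
\[
F = \frac{G\, m_1 m_2}{r^2}
\]
% Einstein: ten coupled, nonlinear partial differential equations relating the
% curvature of spacetime to the distribution of mass and energy.
\[
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8 \pi G}{c^4}\, T_{\mu\nu}
\]
```

For most practical problems the first equation is accurate enough, and it is vastly easier to work with.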

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model, sometimes focusing on causation gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable even if those variables are merely associated with certain outcomes and may not cause those outcomes. To choose a very simple example, a model that posits that a rooster crowing leads to the rising of the sun can be a very good predictive model while saying nothing about causation. And there are actually many examples of this in contemporary scientific practice. Scientists working for the Netflix corporation on improving the prediction of customers’ movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don’t have a true causal model. (See Galit Shmueli, “To Explain or to Predict?,” Statistical Science, 2010, vol. 25, no. 3.)
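As a toy illustration (entirely synthetic data, and my own example rather than anything from the Netflix work), here is the rooster model from the paragraph above turned into code: crowing time predicts sunrise well because both share a hidden common cause, not because one causes the other.

```python
# A minimal sketch of a purely predictive (non-causal) model, using made-up data.
import numpy as np

rng = np.random.default_rng(0)

# Hidden common cause: how early dawn breaks on each of 1000 days (hours after midnight).
dawn = rng.normal(loc=6.0, scale=0.5, size=1000)

# The rooster crows a little before sunrise; it does not cause the sun to rise.
rooster_crow = dawn - 0.2 + rng.normal(0.0, 0.05, size=1000)
sunrise = dawn + rng.normal(0.0, 0.05, size=1000)

# Fit sunrise time as a linear function of crowing time.
slope, intercept = np.polyfit(rooster_crow, sunrise, deg=1)
predicted = slope * rooster_crow + intercept

# The purely associational model predicts sunrise to within a few minutes...
rmse_hours = np.sqrt(np.mean((predicted - sunrise) ** 2))
print(f"prediction error: about {rmse_hours * 60:.1f} minutes")

# ...even though silencing the rooster would not delay the sun by a second.
```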

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, that there are no standards. Making good maps requires knowledge and great skill.

As I noted above, physicists tend to prefer Newton’s theory of gravity rather than Einstein’s to predict the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime, space and time are relative, not absolute, spacetime is curved in the presence of mass, and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force“), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers an incremental improvement to Newton’s, but Einstein’s picture of gravity is so radically different, Kuhn was right in seeing Einstein’s theory as a revolution. But scientists continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient.  (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

Uncertainty, Debate, and Imprecision in Mathematics

If you remember anything about the mathematics courses you took in high school, it is that mathematics is the one subject in which there is absolute certainty and precision in all its answers. Unlike history, social science, and the humanities, which offer a variety of interpretations of subject matter, mathematics is unified and absolute.  Two plus two equals four and that is that. If you answer a math problem wrong, there is no sense in arguing a different interpretation with the teacher. Even the “hard sciences,” such as physics, may revise long-established conclusions, as new evidence comes in and new theories are developed. But mathematical truths are seemingly forever. Or are they?

You might not know it, but there has been a revolution in the human understanding of mathematics in the past 150 years that has undermined the belief that mathematics holds the key to absolute truth about the nature of the universe. Even as mathematical knowledge has increased, uncertainty has also increased, and different types of mathematics have been created that have different premises and are incompatible with each other. The value of mathematics remains clear. Mathematics increases our understanding, and science would not be possible without it. But the status of mathematics as a source of precise and infallible truth about reality is less clear.

For over 2000 years, the geometrical conclusions of the Greek mathematician Euclid were regarded as the most certain type of knowledge that could be obtained. Beginning with a small number of axioms, Euclid developed a system of geometry that was astonishing in breadth. The conclusions of Euclid’s geometry were regarded as absolutely certain, being derived from axioms that were “self-evident.”  Indeed, if one begins with “self-evident” truths and derives conclusions from those truths in a logical and verifiable manner, then one’s conclusions must also be undoubtedly true.

However, in the nineteenth century, these truths were undermined by the discovery of new geometries based on different axioms — the so-called “non-Euclidean geometries.” The conclusions of geometry were no longer absolute, but relative to the axioms that one chose. This became something of a problem for the concept of mathematical “proof.” If one can build different systems of mathematics based on different axioms, then “proof” only means that one’s conclusions are derivable from one’s axioms, not that one’s conclusions are absolutely true.
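The clearest example is the parallel axiom itself. Stated informally (this is standard textbook material, abbreviated here):

```latex
% Given a line L and a point P not on L, how many lines through P never meet L?
\[
\text{Euclidean: exactly one} \qquad
\text{Hyperbolic: infinitely many} \qquad
\text{Elliptic: none}
\]
% Each choice yields a consistent geometry, and the theorems change accordingly:
% a triangle's angles sum to exactly, less than, or more than 180 degrees, respectively.
```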

If you peruse the literature of mathematics on the definition of “axiom,” you will see what I mean. Many authors include the traditional definition of an axiom as a “self-evident truth.” But others define an axiom as a “definition” or “assumption,” seemingly as an acceptable alternative to “self-evident truth.” Surely there is a big difference between an “assumption,” a “self-evident truth,” and a “definition,” no? This confusing medley of definitions of “axiom” is the result of the nineteenth century discovery of non-Euclidean geometries. The issue has not been fully cleared up by mathematicians, but the Wikipedia entry on “axiom” probably represents the consensus of most mathematicians, when it states: “No explicit view regarding the absolute truth of axioms is ever taken in the context of modern mathematics, as such a thing is considered to be irrelevant.”  (!)

In reaction to the new uncertainty, mathematicians responded by searching for new foundations for mathematics, in the hopes of finding a set of axioms that would establish once and for all the certainty of mathematics. The “Foundations of Mathematics” movement, as it came to be called, ultimately failed. One of the leaders of the foundations movement, the great mathematician Bertrand Russell, declared late in life:

I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere. But I discovered that many mathematical demonstrations, which my teachers expected me to accept, were full of fallacies, and that, if certainty were indeed discoverable in mathematics, it would be in a new kind of mathematics, with more solid foundations than those that had hitherto been thought secure. But as the work proceeded, I was continually reminded of the fable about the elephant and the tortoise. Having constructed an elephant upon which the mathematical world could rest, I found the elephant tottering, and proceeded to construct a tortoise to keep the elephant from falling. But the tortoise was no more secure than the elephant, and after some twenty years of arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. (The Autobiography of Bertrand Russell)

Today, there are a variety of mathematical systems based on a variety of assumptions, and no one yet has succeeded in reconciling all the systems into one, fundamental, true system of mathematics. In fact, you wouldn’t know it from high school math, but some topics in mathematics have led to sharp divisions and debates among mathematicians. And most of these debates have never really been resolved — mathematicians have simply grown to tolerate the existence of different mathematical systems in the same way that ancient pagans accepted the existence of multiple gods.

Some of the most contentious issues in mathematics have revolved around the concept of infinity. In the nineteenth century, the mathematician Georg Cantor developed a theory about different sizes of infinite sets, but his arguments immediately attracted criticism from fellow mathematicians and remain controversial to this day. The central problem is that measuring infinity, assigning a quantity to infinity, is inherently an endless process. Once you think you have measured infinity, you simply add a one to it, and you have something greater than infinity — which means your original infinity was not truly infinite. Henri Poincare, one of the greatest mathematicians in history, rejected Cantor’s theory, noting: “Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already.”  Stephen Simpson, a mathematician at Pennsylvania State University, likewise asks, “What truly infinite objects exist in the real world?” Objections to Cantor’s theory of infinity led to the emergence of new mathematical schools of thought such as finitism and intuitionism, which rejected the legitimacy of infinite mathematical objects.
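For readers who have never seen it, Cantor’s key argument can be sketched in a few lines (the standard “diagonal” construction, abbreviated here):

```latex
% Suppose the real numbers between 0 and 1 could be written out in a complete list:
\[
r_1 = 0.d_{11}d_{12}d_{13}\ldots, \quad
r_2 = 0.d_{21}d_{22}d_{23}\ldots, \quad
r_3 = 0.d_{31}d_{32}d_{33}\ldots, \;\ldots
\]
% Construct a new number x whose n-th digit always differs from the n-th digit of
% the n-th number on the list (say, use 5 unless that digit is 5, then use 4):
\[
x = 0.x_1 x_2 x_3 \ldots \qquad \text{with } x_n \neq d_{nn} \text{ for every } n .
\]
% Then x differs from every listed number in at least one decimal place, so no list
% can contain all the reals. The counting numbers can be listed, so the reals form
% a strictly "larger" infinity.
```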

Cantor focused his mental energies on concepts of the infinitely large, but another idea in mathematics was also controversial — that of the infinitely small, the “infinitesimal.” To give you an idea of how controversial the infinitesimal has been, I note that Cantor himself rejected the existence of infinitesimals! In Cantor’s view, the concept of something being infinitely small was inherently contradictory — if something is small, then it is inherently finite! And yet, infinitesimals have been used by mathematicians for hundreds of years. The infinitesimal was used by Leibniz in his version of calculus, and it is used today in the field of mathematics known as “non-standard analysis.” There is still no consensus among mathematicians today about the existence or legitimacy of infinitesimals, but infinitesimals, like imaginary numbers, seem to be useful in calculations, and as long as it works, mathematicians are willing to tolerate them, albeit not without some criticism.

The existence of different types of mathematical systems leads to some strange and contradictory answers to some of the simplest questions in mathematics. In school, you were probably taught that parallel lines never meet. That is how things work in Euclidean geometry, but the behavior of parallels is precisely what changes in other geometries: in hyperbolic geometry, a given line has infinitely many parallels through a single outside point, while in elliptic geometry there are no parallel lines at all. And in projective geometry, parallel lines meet at a point at infinity!

Or consider the infinite decimal 0.9999 . . .  Is this infinite decimal equal to 1? The common sense answer that students usually give is “of course not.” But most mathematicians argue that both numbers are equal! Their logic is as follows: in the system of “real numbers,” there is no number between 0.999. . . and 1. If the difference between the two were anything greater than zero, there would have to be numbers in between; since there are none, subtracting 0.999. . . from 1 gives exactly zero. And that means both numbers are the same!
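One standard way to spell out that argument within the real numbers is to read the infinite decimal as a geometric series:

```latex
% 0.999... as an infinite geometric series with first term 9/10 and ratio 1/10:
\[
0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
\;=\; \frac{9/10}{1 - 1/10}
\;=\; 1 .
\]
% The familiar classroom shortcut says the same thing: let x = 0.999..., then
% 10x = 9.999..., subtracting gives 9x = 9, so x = 1.
```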

However, in the system of numbers known as “hyperreals,” a system which includes infinitesimals, there exists an infinitesimal number between 0.999. . .  and 1. So under this system, 0.999. . .  and 1 are NOT the same! (A great explanation of this paradox is here.) So which system of numbers is the correct one? There is no consensus among mathematicians. But there is a great joke:

How many mathematicians does it take to screw in a light bulb?

0.999 . . .

The invention of computers has led to the creation of a new system of mathematics known as “floating point arithmetic.” This was necessary because, for all of their amazing capabilities, computers do not have enough memory or processing capability to precisely deal with all of the real numbers. To truly depict an infinite decimal, a computer would need an infinite amount of memory. So floating point arithmetic deals with this problem by using a degree of approximation.
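A quick illustration in Python (any language using the standard double-precision format behaves the same way) shows the approximation at work:

```python
# Floating point stores binary approximations of decimal numbers, not the numbers themselves.
print(0.1 + 0.2)             # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)      # False

# The literal 0.1 is silently rounded to the nearest value the format can represent:
from decimal import Decimal
print(Decimal(0.1))          # 0.1000000000000000055511151231257827021181583404541015625

# In practice, comparisons are made within a tolerance rather than exactly.
import math
print(math.isclose(0.1 + 0.2, 0.3))   # True
```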

One of the odd characteristics of the standard version of floating point arithmetic is that there is not one zero, but two zeros: a positive zero and a negative zero. What’s that you say? There’s no such thing as positive zero and negative zero? Well, not in the number system you were taught, but these numbers do exist in floating point arithmetic. And you can use them to divide by zero, which is something else I bet you thought you couldn’t do.  One divided by positive zero equals positive infinity, while one divided by negative zero equals negative infinity!
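Here is a small sketch of that behavior (using NumPy, because plain Python raises an error on float division by zero instead of following the IEEE 754 rule):

```python
import numpy as np

pos_zero = np.float64(0.0)
neg_zero = np.float64(-0.0)

# The two zeros compare as equal, yet they carry different signs.
print(pos_zero == neg_zero)            # True
print(np.copysign(1.0, pos_zero))      # 1.0
print(np.copysign(1.0, neg_zero))      # -1.0

# Dividing a positive number by each zero yields oppositely signed infinities.
with np.errstate(divide="ignore"):     # suppress the divide-by-zero warning
    print(np.float64(1.0) / pos_zero)  # inf
    print(np.float64(1.0) / neg_zero)  # -inf
```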

What the history of mathematics indicates is that the world is not converging toward one, true system of mathematics, but creating multiple, incompatible systems of mathematics, each of which has its own logic. If you think of mathematics as a set of tools for understanding reality, rather than reality itself, this makes sense. You want a variety of tools to do different things. Sometimes you need a hammer, sometimes you need a socket wrench, sometimes you need a Phillips screwdriver, etc. The only true test of a tool is how useful it is — a single tool that tried to do everything would be unhelpful.

You probably didn’t know about most of the issues in mathematics I have just mentioned, because they are usually not taught, whether at the elementary school level, the high school level, or even in college. Mathematics education consists largely of being taught the right way to perform a calculation, and then doing a variety of these calculations over and over and over. . . .

But why is that? Why is mathematics education just about learning to calculate, and not discussing controversies? I can think of several reasons.

One reason may be that most people who go into mathematics tend to have a desire for greater certainty. They don’t like uncertainty and imprecise answers, so they learn math, avoid mathematical controversies or ignore them, and then teach students a mathematics without uncertainty. I recall my college mathematics instructor declaring to the class one day that she went into mathematics precisely because it offered sure answers. My teacher certainly had that much in common with Bertrand Russell (quoted above).

Another reason surely is that there is a large element of indoctrination in education generally, and airing mathematical controversies among students might have the effect of undermining authority. It is true that students can discuss controversies in the social sciences and humanities, but that’s because we live in a democratic society in which there are a variety of views on social issues, and no one group has the power to impose a single view on the classroom. But even a democratic society is not interested in teaching controversies in mathematics — it’s interested in creating good workers for the economy. We need people who can make change, draw up a budget, and measure things, not people who challenge widely-accepted beliefs.

This utilitarian view of mathematics education seems to be universal, shared by democratic and totalitarian governments alike. Forcing students to perform endless calculations without allowing them to ask “why” is a great way to bore children and make them hate math, but at least they’ll be obedient citizens.

What is “Mythos” and “Logos”?

The terms “mythos” and “logos” are used to describe the transition in ancient Greek thought from the stories of gods, goddesses, and heroes (mythos) to the gradual development of rational philosophy and logic (logos). The former is represented by the earliest Greek thinkers, such as Hesiod and Homer; the latter is represented by later thinkers called the “pre-Socratic philosophers” and then Socrates, Plato, and Aristotle. (See the book: From Myth to Reason? Studies in the Development of Greek Thought).

In the earliest, “mythos” stage of development, the Greeks saw events of the world as being caused by a multitude of clashing personalities — the “gods.” There were gods for natural phenomena such as the sun, the sea, thunder and lightning, and gods for human activities such as winemaking, war, and love. The primary mode of explanation of reality consisted of highly imaginative stories about these personalities. However, as time went on, Greek thinkers became critical of the old myths and proposed alternative explanations of natural phenomena based on observation and logical deduction. Under “logos,” the highly personalized worldview of the Greeks became transformed into one in which natural phenomena were explained not by invisible superhuman persons, but by impersonal natural causes.

However, many scholars argue that there was not such a sharp distinction between mythos and logos historically, that logos grew out of mythos, and elements of mythos remain with us today.

For example, ancient myths provided the first basic concepts used subsequently to develop theories of the origins of the universe. We take for granted the words that we use every day, but the vast majority of human beings never invent a single word or original concept in their lives — they learn these things from their culture, which is the end-product of thousands of years of speaking and writing by millions of people long-dead. The very first concepts of “cosmos,” “beginning,” “nothingness,” and differentiation from a single substance — these were not present in human culture for all time, but originated in ancient myths. Subsequent philosophers borrowed these concepts from the myths, while discarding the overly-personalistic interpretations of the origins of the universe. In that sense, mythos provided the scaffolding for the growth of philosophy and modern science. (See Walter Burkert, “The Logic of Cosmogony” in From Myth to Reason? Studies in the Development of Greek Thought.)

An additional issue is the fact that not all myths are wholly false. Many myths are stories that communicate truths even if the characters and events in the story are fictional. Socrates and Plato denounced many of the early myths of the Greeks, but they also illustrated philosophical points with stories that were meant to serve as analogies or metaphors. Plato’s allegory of the cave, for example, is meant to illustrate the ability of the educated human to perceive the true reality behind surface impressions. Could Plato have made the same philosophical point in a literal language, without using any stories or analogies? Possibly, but the impact would be less, and it is possible that the point would not be effectively communicated at all.

Some of the truths that myths communicate are about human values, and these values can be true even if the stories in which the values are embedded are false. Ancient Greek religion contained many preposterous stories, and the notion of personal divine beings directing natural phenomena and intervening in human affairs was false. But when the Greeks built temples and offered sacrifices, they were not just worshiping personalities — they were worshiping the values that the gods represented. Apollo was the god of light, knowledge, and healing; Hera was the goddess of marriage and family; Aphrodite was the goddess of love; Athena was the goddess of wisdom; and Zeus, the king of the gods, upheld order and justice. There’s no evidence at all that these personalities existed or that sacrifices to these personalities would advance the values they represented. But a basic respect for and worshipful disposition toward the values the gods represented was part of the foundation of ancient Greek civilization. I don’t think it was a coincidence that the city of Athens, whose patron goddess was Athena, went on to produce some of the greatest philosophers the world has seen — love of wisdom is the prerequisite for knowledge, and that love of wisdom grew out of the culture of Athens. (The ancient Greek word philosophia literally means “love of wisdom.”)

It is also worth pointing out that worship of the gods, for all of its superstitious aspects, was not incompatible with the growth of scientific knowledge. Modern western medicine originated in the healing temples devoted to Asclepius, the son of Apollo and the god of medicine. Both of the great ancient physicians Hippocrates and Galen are reported to have begun their careers in the temples of Asclepius, the first hospitals. Hippocrates is widely regarded as the father of western medicine and Galen is considered the most accomplished medical researcher of the ancient world. As love of wisdom was the prerequisite for philosophy, reverence for healing was the prerequisite for the development of medicine.

Karen Armstrong has written that ancient myths were never meant to be taken literally, but were “metaphorical attempts to describe a reality that was too complex and elusive to express in any other way.” (A History of God) I am not sure that’s completely accurate. I think it most likely that the mass of humanity believed in the literal truth of the myths, while educated human beings understood the gods to be metaphorical representations of the good that existed in nature and humanity. Some would argue that this use of metaphors to describe reality is deceptive and unnecessary. But a literal understanding of reality is not always possible, and metaphors are widely used even by scientists.

Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book, Making Truth: Metaphor in Science. According to Brown, the history of the human understanding of the atom, which cannot be directly seen, began with a simple metaphor of atoms as billiard balls; later, scientists compared atoms to plum pudding; then they compared the atom to our solar system, with electrons “orbiting” around a nucleus. There has been a gradual improvement in our models of the atom over time, but ultimately, there is no single, correct literal representation of the atom. Each model illustrates an aspect or aspects of atomic behavior — no one model can capture all aspects accurately. Even the notion of atoms as particles is not fully accurate, because atoms can behave like waves, without a precise position in space as we normally think of particles as having. The same principle applies to models of the molecule as well. (Brown, chapters 4-6)  A number of scientists have compared the imaginative construction of scientific models to map-making — there is no single, fully accurate way to map the earth (using a flat surface to depict a sphere), so we are forced to use a variety of maps at different scales and projections, depending on our needs.

Sometimes the visual models that scientists create are quite unrealistic. The model of the “energy landscape” was created by biologists in order to understand the process of protein folding — the basic idea is to imagine a ball rolling on a surface pitted with holes and valleys of varying depth. Just as the ball tends to seek out the low points on the landscape (due to gravity), proteins tend to seek the lowest possible free energy state. All biologists know the energy landscape model is a metaphor — in reality, proteins don’t actually go rolling down hills! But the model is useful for understanding a process that is highly complex and cannot be directly seen.
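To see what this metaphor amounts to in practice, here is a toy sketch in Python (purely illustrative; the landscape function, step size, and starting point are invented for the example, not taken from any real protein model) in which “rolling downhill” is simply stepping repeatedly in the direction of decreasing energy. Like the metaphor itself, the procedure only settles into a nearby valley, which need not be the deepest one.

```python
import math

def energy(x):
    # An invented, bumpy one-dimensional "landscape": a broad bowl with several valleys.
    return 0.05 * x**2 + math.sin(3 * x)

def slope(x):
    # Derivative of the landscape: tells the "ball" which way is downhill.
    return 0.1 * x + 3 * math.cos(3 * x)

def roll_downhill(x, step=0.01, iters=5000):
    # Move against the slope, like a ball settling into a valley under gravity.
    for _ in range(iters):
        x -= step * slope(x)
    return x

start = 2.0
settled = roll_downhill(start)
print(f"started at x = {start}, settled at x = {settled:.3f}, energy = {energy(settled):.3f}")
```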

What is particularly interesting is that some of the metaphorical models of science are frankly anthropomorphic — they are based on qualities or phenomena found in persons or personal institutions. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly; proteins that don’t fold correctly are either treated or dismantled so that they do not cause damage to the larger organism — a process that has been given a medical metaphor: “protein triage.” (Brown, chapters 7-8) Even referring to the “laws of physics” is to use a metaphorical comparison to human law. So even as logos has triumphed over the mythos conception that divine personalities rule natural phenomena, qualities associated with personal beings have continued to sneak into modern scientific models.

The transition from a mythos-dominated worldview to a logos-dominated worldview was a stupendous achievement of the ancient Greeks, and modern philosophy, science, and civilization would not be possible without it. But the transition did not involve a complete replacement of one worldview with another; rather, it involved the building of additional useful structures on top of a simple foundation. Logos grew out of its origins in mythos, and retains elements of mythos to this day. The compatibilities and conflicts between these two modes of thought are the thematic basis of this website.

Related: A Defense of the Ancient Greek Pagan Religion

Einstein’s Judeo-Quaker Pantheism

I recently came across a fascinating website, Einstein: Science and Religion, which I hope you will find time to peruse.  The website, edited by Arnold Lesikar, Professor Emeritus in the  Department of Physics, Astronomy, and Engineering Science at St. Cloud State University in Minnesota, contains a collection of Einstein’s various comments on religion, God, and the relationship between science and religion.

Einstein’s views on religion have been frequently publicized and commented on, but it is difficult to get an accurate and comprehensive assessment of Einstein’s actual views on religion because of the tendency of both believers and atheists to cherry-pick particular quotations or to quote out of context. Einstein’s actual views on religion are complex and multifaceted, and one is apt to get the wrong impression by focusing on just one or several of Einstein’s comments.

One should begin by noting that Einstein did not accept the notion of a personal God, an omnipotent superbeing who listens to our prayers and intervenes in the operations of the laws of the universe. Einstein repeatedly rejected this notion of God throughout his life, from his adolescence to old age. He also believed that many, if not most, of the stories in the Bible were untrue.

The God Einstein did believe in was the God of the philosopher Spinoza. Spinoza conceived of God as being nothing more than the natural order underlying this universe — this order was fundamentally an intelligent order, but it was a mistake to conceive of God as having a personality or caring about man. Spinoza’s view was known as pantheism, and Einstein explicitly stated that he was a proponent of Spinoza and of pantheism. Einstein also argued that ethical systems were a purely human concern, with no superhuman authority figure behind them, and there was no afterlife in which humans could be rewarded or punished. In fact, Einstein believed that immortality was undesirable anyway. Finally, Einstein sometimes expressed derogatory views of religious institutions and leaders, believing them responsible for superstition and bigotry among the masses.

However, it should also be noted that Einstein’s skepticism and love of truth were too deep to result in a rigid and dogmatic atheism. Einstein described himself variously as an agnostic or pantheist and disliked the arrogant certainty of atheists. He even refused to definitively reject the idea of a personal God, believing that there were too many mysteries behind the universe to come to any final conclusions about God. He also wrote that he did not want to destroy the idea of a personal God in the minds of the masses, because even a primitive metaphysics was better than no metaphysics at all.

Even while rejecting the notion of a personal God, Einstein described God as a spirit, a spirit with the attribute of thought or intelligence: “[E]very one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe — a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.” In an interview, Einstein expressed a similar view:

If there is any such concept as a God, it is a subtle spirit, not an image of a man that so many have fixed in their minds. In essence, my religion consists of a humble admiration for this illimitable superior spirit that reveals itself in the slight details that we are able to perceive with our frail and feeble minds.

Distinguishing between the religious feeling of the “naïve man” and the religious feeling of the scientist, Einstein argued:  “[The scientist’s] religious feeling takes the form of a rapturous amazement at the harmony of natural law, which reveals an intelligence of such superiority that, compared with it, all the systematic thinking and acting of human beings is an utterly insignificant reflection.”

While skeptical and often critical of religious institutions, Einstein also believed that religion played a valuable and necessary role for civilization in creating “superpersonal goals” for human beings, goals above and beyond self-interest, that could not be established by pure reason.  Reason could provide us with the facts of existence, said Einstein, but the question of how we should live our lives necessarily required going beyond reason. According to Einstein:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly.

Einstein even argued that the establishment of moral goals by religious prophets was one of the most important accomplishments of humanity, eclipsing even scientific accomplishment:

Our time is distinguished by wonderful achievements in the fields of scientific understanding and the technical application of those insights. Who would not be cheered by this? But let us not forget that knowledge and skills alone cannot lead humanity to a happy and dignified life. Humanity has every reason to place the proclaimers of high moral standards and values above the discoverers of objective truth. What humanity owes to personalities like Buddha, Moses, and Jesus ranks for me higher than all the achievements of the enquiring and constructive mind.

Einstein’s views of Jesus are particularly intriguing. Einstein never rejected his Jewish identity and refused all attempts by others to convert him to Christianity. Einstein also refused to believe the stories of Jesus’s alleged supernatural powers. But Einstein also believed the historical existence of Jesus was a fact, and Einstein regarded Jesus as one of the greatest — if not the greatest — of religious prophets:

As a child, I received instruction both in the Bible and in the Talmud. I am a Jew, but I am enthralled by the luminous figure of the Nazarene. . . . No one can read the Gospels without feeling the actual presence of Jesus. His personality pulsates in every word. No myth is filled with such life. How different, for instance, is the impression which we receive from an account of legendary heroes of antiquity like Theseus. Theseus and other heroes of his type lack the authentic vitality of Jesus. . . . No man can deny the fact that Jesus existed, nor that his sayings are beautiful. Even if some of them have been said before, no one has expressed them so divinely as he.

Toward the end of his life, Einstein, while remaining Jewish, expressed great admiration for the Christian sect known as the Quakers. Einstein stated that the “Society of Friends,” as the Quakers referred to themselves, had the “highest moral standards” and that their influence was “very beneficial.” Einstein even declared, “If I were not a Jew I would be a Quaker.”

Now Einstein’s various pronouncements on religion are scattered in multiple sources, so it is not surprising that people may get the wrong impression from examining just a few quotes. Sometimes stories of Einstein’s religious views are simply made up, implying that Einstein was a traditional believer. Other times, atheists will emphasize Einstein’s rejection of a personal God, while completely overlooking Einstein’s views on the limits of reason, the necessity of religion in providing superpersonal goals, and the value of the religious prophets.

For some people, a religion without a personal God is not a true religion. But historically, a number of major religions do not hold belief in a personal God as central to their belief system, including Taoism, Buddhism, and Confucianism. In addition, many theologians in monotheistic faiths describe God in impersonal terms, or stress that the attributes of God may be represented symbolically as personal, but that God himself cannot be adequately described as a person. The great Jewish theologian Maimonides argued that although God had been described allegorically and imperfectly by the prophets as having the attributes of a personal being, God did not actually have human thoughts and emotions. The twentieth century Christian theologian Paul Tillich argued that God was not “a being” but the “Ground of Being” or the “Power of Being” existing in all things.

However, it is somewhat odd that while rejecting the notion of a personal God, Einstein saw God as a spirit that seemingly possessed an intelligence far greater than that of human beings. In that, Einstein was similar to Spinoza, who believed God had the attribute of “thought” and that the human mind was but part of the “infinite intellect of God.” But is not intelligence a quality of personal beings? In everyday life, we don’t think of orbiting planets or stars or rocks or water as possessing intelligence, and even if we attribute intelligence to lower forms of life such as bacteria and plants, we recognize that this sort of intelligence is primitive. If you ask people what concrete, existing things best possess the quality of intelligence, they will point to humans — personal beings! Yet both Spinoza and Einstein attribute vast, or even infinite, intelligence to God, while denying that God is a personal being!

I am not arguing that Spinoza and Einstein were wrong or somehow deluding themselves when they argued that God was not a personal being. I am simply pointing out how difficult it is to adequately and accurately describe God. I think Spinoza and Einstein were correct in seeking to modify the traditional concept of God as a type of omnipotent superperson with human thoughts and emotions. But at the same time, it can be difficult to describe God in a way that does not use attributes that are commonly thought of as belonging to personal beings. At best, we can use analogies from everyday experience to indirectly describe God, while acknowledging that all analogies fall short.