What is “Transcendence”?

You may have noticed a number of writings on religious topics that make reference to “transcendence” or “the transcendent.” However, the word “transcendence” is usually not very well defined, if it is defined at all. The Catechism of the Catholic Church makes several references to transcendence, but it’s not completely clear what transcendence means other than the infinite greatness of God, and the fact that God is “the inexpressible, the incomprehensible, the invisible, the ungraspable.” For those who value reason and precise arguments, this vagueness is unsatisfying. Astonishingly, the fifteen-volume Catholic Encyclopedia (1907-1914) did not even have an entry on “transcendence,” though it did have an entry on “transcendentalism,” a largely secular philosophy with a variety of schools and meanings. (The New Catholic Encyclopedia in 1967 finally did have an entry on “transcendence.”)

The Oxford English Dictionary defines “transcendence” as “the action or fact of transcending, surmounting, or rising above . . . ; excelling, surpassing; also the condition or quality of being transcendent, surpassing eminence or excellence. . . .” The reference to “excellence” is probably key to understanding what “transcendence” is. In my previous essay on ancient Greek religion, I pointed out that areté, the Greek word for “excellence,” was a central idea of Greek culture, and that one cannot fully appreciate ancient Greek pagan religion without recognizing the centrality of this devotion to excellence. The Greeks depicted their gods as human, but with perfect physical forms. And while the behavior of the Greek gods was often dubious from a moral standpoint, the Greek gods were still regarded as the givers of wisdom, order, justice, love, and all the institutions of human civilization.

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Theologians refer to transcendence as one of the two natures of God, the other being “immanence.” Transcendence refers to the higher nature of God and immanence refers to God as He currently works in reality, i.e., the cosmic order. The division between those who believe in a personal God and those who believe in an impersonal God reflects the division between the transcendent and immanent view of God. It is no surprise that most scientists who believe in God tend more to the view of an impersonal God, because their whole life is dedicated to examining the reality of the cosmic order, which seems to operate according to a set of rules rather than personal supervision.

Of course, atheists don’t even believe in an impersonal God. One famous atheist, Sigmund Freud, argued that religion was an illusion, a simple exercise in “wish fulfillment.” According to Freud, human beings desired love, immortality, and an end to suffering and pain, so they gravitated to religion as a solution to the inevitable problems and limitations of mortal life. Marxists have a similar view of religion, seeing promises of an afterlife as a barrier to improving actual human life.

Another view was taken by the American philosopher George Santayana, whose book, Reason in Religion, is one of the very finest books ever written on the subject of religion. According to Santayana, religion was an imaginative and poetic interpretation of life; religion supplied ideal ends to which human beings could orient their lives. Religion failed only when it attributed literal truth to these imaginative ideal ends. Thus religions should be judged, Santayana argued, by whether they were good or bad, not by whether they were true or false.

This criterion for judging religion would appear to be irrational, both to rationalists and to those who cling to faith. People tend to equate worship of God with belief in God, and often see literalists and fundamentalists as the most devoted of all. But I would argue that worship is the act of submission to ideal ends, which hold value precisely because they are higher than actually existing things, and therefore cannot pass traditional tests of truth, which call for a correspondence to reality.

In essence, worship is submission to a transcendent Good. We see good in our lives all the time, but we know that the particular goods we experience are partial and perishable. Freud is right that we wish for goods that cannot be acquired completely in our lives and that we use our imaginations to project perfect and eternal goods, i.e. God and heaven. But isn’t it precisely these ideal ends that are sacred, not the flawed, perishable things that we see all around us? In the words of Santayana,

[I]n close association with superstition and fable we find piety and spirituality entering the world. Rational religion has these two phases: piety, or loyalty to necessary conditions, and spirituality, or devotion to ideal ends. These simple sanctities make the core of all the others. Piety drinks at the deep, elemental sources of power and order: it studies nature, honours the past, appropriates and continues its mission. Spirituality uses the strength thus acquired, remodeling all it receives, and looking to the future and the ideal. (Reason in Religion, Chapter XV)

People misunderstand ancient Greek religion when they think it is merely a set of stories about invisible personalities who fly around controlling nature and intervening in human affairs. Many Greek myths were understood to be poetic creations, not history; there were often multiple variations of each myth, and people felt free to modify the stories over time, create new gods and goddesses, and change the functions/responsibilities of each god. Rational consistency was not expected, and depictions of the appearance of any god or goddess in statues or paintings could vary widely. For the Greeks, the gods were not just personalities, but transcendent forms of the Good. This is why Greek religion also worshipped idealized ends and virtues such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks represented these idealized ends and virtues as persons (usually female) in statues, built temples for them, and composed worshipful hymns to them. In fact, the tendency of the Greeks to depict any desired end or virtue as a person was so prevalent that it is sometimes difficult for historians to tell if a particular statue or temple was meant for an actual goddess/god or was a personified symbol. For the ancient Greeks, the distinction may not have been that important, for they tended to think in highly poetic and metaphorical terms.

This may be fine as an interpretation of religion, you may say, but does it make sense to conceive of imaginative transcendent forms as persons or spirits who can actually bring about the goods and virtues that we seek? Is there any reason to think that prayer to Athena will make us wise, that singing a hymn to Zeus will help us win a war, or that a sacrifice at the temples of “Peace” or “Health” will bring us peace or health? If these gods are not powerful persons or spirits that can hear our prayers or observe our sacrifices, but merely poetic representations or symbols, then what good are they and what good is worship?

My view is this: worship and prayer do not affect natural causation. Storms, earthquakes, disease, and all the other calamities that have afflicted humankind from the beginning are not affected by prayer. Addressing these calamities requires research into natural causation, planning, human intervention, and technology. What worship and prayer can do, if they are directed at the proper ends, is help us transcend ourselves, make ourselves better people, and thereby make our societies better.

In a previous essay, I reviewed the works of various physicists, who concluded that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. I think this dynamic view of reality is what we need in order to understand the relationship between the transcendent and the actual. We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are. The pantheism of Spinoza and Einstein is more rational than traditional myths that attributed natural events to a personal God who created the world in six days and subsequently punished evil by causing natural disasters. But pantheism is ultimately a poor basis for religion. What would be the point of worshipping the law of gravity or electromagnetism or the elements in the periodic table? These foundational parts of the universe are impressive, but I would argue that aspiring to something higher is fundamental not only to human nature but to the universe itself. The universe, after all, began simply with a concentrated point of energy; then space expanded and a few elements such as hydrogen and helium formed; only after hundreds of millions of years did the first stars, planets, and other elements necessary for life begin to emerge.

Worshipping the transcendent orients the self to a higher good, out of the immediate here-and-now. And done properly, worship results in worthy accomplishments that improve life. We tend to think of human civilization as being based on the rational mastery of a body of knowledge. But all knowledge began with an imagined transcendent good. The very first lawgivers had no body of laws to study; the first ethicists had no texts on morals to consult; the first architects had no previous designs to emulate; the first mathematicians had no symbols to calculate with; the first musicians had no composers to study. All our knowledge and civilization began with an imagined transcendent good, which inspired experimentation with primitive forms, and then improvement on those initial efforts. Only much later, after many centuries, did the fields of law, ethics, architecture, mathematics, and music become bodies of knowledge requiring years of study. So we attribute these accomplishments to reason, forgetting the imaginative leaps that first spurred these fields.

 

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists is working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth was, in Kuhn’s view, highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles and predicts more phenomena with greater accuracy, the theory must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists,” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’”)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)
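The simplicity gap between the two theories is easy to display side by side. Newton’s law of gravitation is a single algebraic formula, while Einstein’s field equations (shown here in compact tensor notation) pack ten coupled, nonlinear differential equations into one line:

```latex
\begin{align*}
\text{Newton:} \quad & F = \frac{G m_1 m_2}{r^2} \\
\text{Einstein:} \quad & R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^4} \, T_{\mu\nu}
\end{align*}
```

For nearly all practical orbital calculations, the extra accuracy of the second line is not worth its extra difficulty — which is exactly the tradeoff at issue.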

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model; sometimes focusing on causation actually gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable, even if those variables are merely associated with certain outcomes and may not cause those outcomes. To choose a very simple example, a model that posits that a rooster crowing leads to the rising of the sun can be a very good predictive model while saying nothing about causation. And there are many examples of this in contemporary scientific practice. Scientists working for the Netflix corporation on improving the prediction of customers’ movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don’t have a true causal model. (See Galit Shmueli, “To Explain or Predict” in Statistical Science, 2010, vol. 25, no. 3)
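To make the point concrete, here is a minimal sketch in Python (the data and variable names are invented for illustration, not drawn from the Netflix study). The outcome y is caused only by a hidden variable x, yet a model built on z, a mere correlate of x, predicts y very well:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                        # hidden cause (often unmeasurable)
z = x + rng.normal(scale=0.1, size=1000)         # observable proxy: correlated with x,
                                                 # but has no causal effect on y
y = 2.0 * x + rng.normal(scale=0.1, size=1000)   # outcome, caused by x alone

# Fit y on z by least squares: a purely associational model
slope, intercept = np.polyfit(z, y, 1)
predictions = slope * z + intercept

# The non-causal model still predicts accurately
r = np.corrcoef(y, predictions)[0, 1]
print(f"correlation between predictions and outcomes: {r:.3f}")   # ~0.99
```

The model knows nothing about what causes what, yet as a predictive instrument it is nearly as good as the true causal model would be.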

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000 (both numbers, when squared, give 1,000,000). That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, that there are no standards. Making good maps requires knowledge and great skill.

As I noted above, physicists tend to prefer Newton’s theory of gravity rather than Einstein’s to predict the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime, space and time are relative, not absolute, spacetime is curved in the presence of mass, and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers an incremental improvement to Newton’s, but Einstein’s picture of gravity is so radically different, Kuhn was right in seeing Einstein’s theory as a revolution. But scientists continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient.  (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

Uncertainty, Debate, and Imprecision in Mathematics

If you remember anything about the mathematics courses you took in high school, it is that mathematics is the one subject in which there is absolute certainty and precision in all its answers. Unlike history, social science, and the humanities, which offer a variety of interpretations of subject matter, mathematics is unified and absolute.  Two plus two equals four and that is that. If you answer a math problem wrong, there is no sense in arguing a different interpretation with the teacher. Even the “hard sciences,” such as physics, may revise long-established conclusions, as new evidence comes in and new theories are developed. But mathematical truths are seemingly forever. Or are they?

You might not know it, but there has been a revolution in the human understanding of mathematics in the past 150 years that has undermined the belief that mathematics holds the key to absolute truth about the nature of the universe. Even as mathematical knowledge has increased, uncertainty has also increased, and different types of mathematics have been created that have different premises and are incompatible with each other. The value of mathematics remains clear. Mathematics increases our understanding, and science would not be possible without it. But the status of mathematics as a source of precise and infallible truth about reality is less clear.

For over 2000 years, the geometrical conclusions of the Greek mathematician Euclid were regarded as the most certain type of knowledge that could be obtained. Beginning with a small number of axioms, Euclid developed a system of geometry that was astonishing in breadth. The conclusions of Euclid’s geometry were regarded as absolutely certain, being derived from axioms that were “self-evident.”  Indeed, if one begins with “self-evident” truths and derives conclusions from those truths in a logical and verifiable manner, then one’s conclusions must also be undoubtedly true.

However, in the nineteenth century, these truths were undermined by the discovery of new geometries based on different axioms — the so-called “non-Euclidean geometries.” The conclusions of geometry were no longer absolute, but relative to the axioms that one chose. This became something of a problem for the concept of mathematical “proof.” If one can build different systems of mathematics based on different axioms, then “proof” only means that one’s conclusions are derivable from one’s axioms, not that one’s conclusions are absolutely true.

If you peruse the literature of mathematics on the definition of “axiom,” you will see what I mean. Many authors include the traditional definition of an axiom as a “self-evident truth.” But others define an axiom as a “definition” or “assumption,” seemingly as an acceptable alternative to “self-evident truth.” Surely there is a big difference between an “assumption,” a “self-evident truth,” and a “definition,” no? This confusing medley of definitions of “axiom” is the result of the nineteenth century discovery of non-Euclidean geometries. The issue has not been fully cleared up by mathematicians, but the Wikipedia entry on “axiom” probably represents the consensus of most mathematicians, when it states: “No explicit view regarding the absolute truth of axioms is ever taken in the context of modern mathematics, as such a thing is considered to be irrelevant.”  (!)

In reaction to the new uncertainty, mathematicians responded by searching for new foundations for mathematics, in the hopes of finding a set of axioms that would establish once and for all the certainty of mathematics. The “Foundations of Mathematics” movement, as it came to be called, ultimately failed. One of the leaders of the foundations movement, the great mathematician Bertrand Russell, declared late in life:

I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere. But I discovered that many mathematical demonstrations, which my teachers expected me to accept, were full of fallacies, and that, if certainty were indeed discoverable in mathematics, it would be in a new kind of mathematics, with more solid foundations than those that had hitherto been thought secure. But as the work proceeded, I was continually reminded of the fable about the elephant and the tortoise. Having constructed an elephant upon which the mathematical world could rest, I found the elephant tottering, and proceeded to construct a tortoise to keep the elephant from falling. But the tortoise was no more secure than the elephant, and after some twenty years of arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. (The Autobiography of Bertrand Russell)

Today, there are a variety of mathematical systems based on a variety of assumptions, and no one yet has succeeded in reconciling all the systems into one, fundamental, true system of mathematics. In fact, you wouldn’t know it from high school math, but some topics in mathematics have led to sharp divisions and debates among mathematicians. And most of these debates have never really been resolved — mathematicians have simply grown to tolerate the existence of different mathematical systems in the same way that ancient pagans accepted the existence of multiple gods.

Some of the most contentious issues in mathematics have revolved around the concept of infinity. In the nineteenth century, the mathematician Georg Cantor developed a theory about different sizes of infinite sets, but his arguments immediately attracted criticism from fellow mathematicians and remain controversial to this day. The central problem is that measuring infinity, assigning a quantity to infinity, is inherently an endless process. Once you think you have measured infinity, you simply add one to it, and you have something greater than infinity — which means your original infinity was not truly infinite. Henri Poincaré, one of the greatest mathematicians in history, rejected Cantor’s theory, noting: “Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already.”  Stephen Simpson, a mathematician at Pennsylvania State University, likewise asks, “What truly infinite objects exist in the real world?” Objections to Cantor’s theory of infinity led to the emergence of new mathematical schools of thought such as finitism and intuitionism, which rejected the legitimacy of infinite mathematical objects.

Cantor focused his mental energies on concepts of the infinitely large, but another idea in mathematics was also controversial — that of the infinitely small, the “infinitesimal.” To give you an idea of how controversial the infinitesimal has been, I note that Cantor himself rejected the existence of infinitesimals! In Cantor’s view, the concept of something being infinitely small was inherently contradictory — if something is small, then it is inherently finite! And yet, infinitesimals have been used by mathematicians for hundreds of years. The infinitesimal was used by Leibniz in his version of calculus, and it is used today in the field of mathematics known as “non-standard analysis.” There is still no consensus among mathematicians today about the existence or legitimacy of infinitesimals, but infinitesimals, like imaginary numbers, seem to be useful in calculations, and as long as they work, mathematicians are willing to tolerate them, albeit not without some criticism.

The existence of different types of mathematical systems leads to some strange and contradictory answers to some of the simplest questions in mathematics. In school, you were probably taught that parallel lines never meet. That is true in Euclidean geometry, but not in hyperbolic geometry. In projective geometry, parallel lines meet at infinity!
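The divergence shows up in something as basic as the angles of a triangle. Under each system’s axioms, the interior angles sum differently:

```latex
\begin{align*}
\text{Euclidean (flat space):}     \quad & \alpha + \beta + \gamma = 180^\circ \\
\text{Hyperbolic (saddle-shaped):} \quad & \alpha + \beta + \gamma < 180^\circ \\
\text{Elliptic (sphere-like):}     \quad & \alpha + \beta + \gamma > 180^\circ
\end{align*}
```

Each line is a theorem within its own system; none of the three is “the” truth about triangles.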

Or consider the infinite decimal 0.9999 . . .  Is this infinite decimal equal to 1? The common sense answer that students usually give is “of course not.” But most mathematicians argue that both numbers are equivalent! Their logic is as follows: in the system of “real numbers,” there is no number between 0.999. . . and 1. Therefore, if you subtract 0.999. . .  from 1, the result is zero. And that means both numbers are the same!
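The standard argument can also be written out in a few lines of algebra:

```latex
\begin{align*}
x &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots \\
9x &= 9 \\
x &= 1
\end{align*}
```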

However, in the system of numbers known as “hyperreals,” a system which includes infinitesimals, there exists an infinitesimal number between 0.999. . .  and 1. So under this system, 0.999. . .  and 1 are NOT the same! (A great explanation of this paradox is here.) So which system of numbers is the correct one? There is no consensus among mathematicians. But there is a great joke:

How many mathematicians does it take to screw in a light bulb?

0.999 . . .

The invention of computers has led to the creation of a new system of mathematics known as “floating point arithmetic.” This was necessary because, for all of their amazing capabilities, computers do not have enough memory or processing capability to precisely deal with all of the real numbers. To truly depict an infinite decimal, a computer would need an infinite amount of memory. So floating point arithmetic deals with this problem by using a degree of approximation.
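You can watch this approximation at work in any language that uses standard IEEE 754 double-precision numbers. A quick Python sketch:

```python
# 0.1 has no exact binary representation, so familiar decimal
# arithmetic goes subtly wrong:
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False
print(f"{0.1:.20f}")       # 0.10000000000000000555 (the stored approximation)
```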

One of the odd characteristics of the standard version of floating point arithmetic is that there is not one zero, but two zeros: a positive zero and a negative zero. What’s that you say? There’s no such thing as positive zero and negative zero? Well, not in the number system you were taught, but these numbers do exist in floating point arithmetic. And you can use them to divide by zero, which is something else I bet you thought you couldn’t do.  One divided by positive zero equals positive infinity, while one divided by negative zero equals negative infinity!
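Here is a short demonstration, again in Python. One caveat: plain Python raises an exception on float division by zero, so the sketch uses NumPy to expose the underlying IEEE 754 behavior:

```python
import math
import numpy as np

pos_zero = 0.0
neg_zero = -0.0
print(pos_zero == neg_zero)            # True: the two zeros compare equal...
print(math.copysign(1.0, neg_zero))    # -1.0: ...but carry different signs

with np.errstate(divide='ignore'):     # suppress the divide-by-zero warning
    print(np.float64(1.0) / np.float64(0.0))    # inf
    print(np.float64(1.0) / np.float64(-0.0))   # -inf
```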

What the history of mathematics indicates is that the world is not converging toward one, true system of mathematics, but creating multiple, incompatible systems of mathematics, each of which has its own logic. If you think of mathematics as a set of tools for understanding reality, rather than reality itself, this makes sense. You want a variety of tools to do different things. Sometimes you need a hammer, sometimes you need a socket wrench, sometimes you need a Phillips screwdriver, etc. The only true test of a tool is how useful it is — a single tool that tried to do everything would be unhelpful.

You probably didn’t know about most of the issues in mathematics I have just mentioned, because they are usually not taught, whether at the elementary school level, the high school level, or even in college. Mathematics education consists largely of being taught the right way to perform a calculation, and then doing a variety of these calculations over and over and over. . . .

But why is that? Why is mathematics education just about learning to calculate, and not discussing controversies? I can think of several reasons.

One reason may be that most people who go into mathematics tend to have a desire for greater certainty. They don’t like uncertainty and imprecise answers, so they learn math, avoid mathematical controversies or ignore them, and then teach students a mathematics without uncertainty. I recall my college mathematics instructor declaring to the class one day that she went into mathematics precisely because it offered sure answers. My teacher certainly had that much in common with Bertrand Russell (quoted above).

Another reason surely is that there is a large element of indoctrination in education generally, and airing mathematical controversies among students might have the effect of undermining authority. It is true that students can discuss controversies in the social sciences and humanities, but that’s because we live in a democratic society in which there are a variety of views on social issues, and no one group has the power to impose a single view on the classroom. But even a democratic society is not interested in teaching controversies in mathematics — it’s interested in creating good workers for the economy. We need people who can make change, draw up a budget, and measure things, not people who challenge widely-accepted beliefs.

This utilitarian view of mathematics education seems to be universal, shared by democratic and totalitarian governments alike. Forcing students to perform endless calculations without allowing them to ask “why” is a great way to bore children and make them hate math, but at least they’ll be obedient citizens.

Belief and Evidence

A common argument by atheists is that belief without evidence is irrational and unjustified, and that those arguing for the existence of God have the burden of proof.  Bertrand Russell famously argued that if one claims that there is a teapot orbiting the sun, the burden of proving the existence of the teapot is on the person who asserts the existence of the teapot, not the denier.  Christopher Hitchens has similarly argued that “What can be asserted without evidence can also be dismissed without evidence.”  Hitchens has advanced this principle even further, arguing that “exceptional claims demand exceptional evidence.”  (god is not Great, pp. 143, 150)  Sam Harris has argued that nearly every evil in human history “can be attributed to an insufficient taste for evidence” and that “We must find our way to a time when faith, without evidence, disgraces anyone who would claim it.”   (The End of Faith, pp. 25, 48)

A demand for evidence is surely a legitimate requirement for most ordinary claims.  But it would be a mistake to turn this rule into a rigid and universal requirement, because many of the issues and problems we encounter in our lives are not always rich with evidence.  Some issues have a wealth of evidence, some issues have a small amount of indirect or circumstantial evidence, some issues have evidence compatible with a variety of radically different conclusions, and some issues have virtually no evidence.  What’s worse is that there appears to be an inverse relationship between the size and importance of the issue one is addressing and the amount of evidence that is available.  The bigger the question one has, the less evidence there is to address it.  The questions of how to obtain a secure and steady supply of food, water, and shelter, how to extend the human lifespan and increase the economic standard of living, all have scientific-technological answers backed by abundant evidence.  Other issues, such as the origins of the universe, the nature of the elementary particles, and the evolution of life, also have large amounts of evidence, albeit with significant gaps in certain details.  But some of the most important questions we face have such a scarcity of evidence that a variety of conflicting beliefs seems inevitable.  Why does the universe exist?  Is there intelligent life on other planets, and if so, how many planets have such life?  Where did the physical laws of the universe come from?  What should we do with our lives?  Will the human race survive the next 1000 years?  Are our efforts to be good people and follow moral codes all in vain?

In cases of scarce evidence, to demand that sufficient evidence exist before forming a belief is to put the cart before the horse.  If one looks at the origins and growth of knowledge in human civilization, belief begins with imagination — only later are beliefs tested and challenged.  Without imagination, there are no hypotheses to test.  In fact, one would not know what evidence to gather if one did not begin with a belief.  Knowledge would never advance.  As the philosopher George Santayana argued in his book Reason in Religion,

A good mythology cannot be produced without much culture and intelligence. Stupidity is not poetical. . . . The Hebrews, denying themselves a rich mythology, remained without science and plastic art; the Chinese, who seem to have attained legality and domestic arts and a tutored sentiment without passing through such imaginative tempests as have harassed us, remain at the same time without a serious science or philosophy. The Greeks, on the contrary, precisely the people with the richest and most irresponsible myths, first conceived the cosmos scientifically, and first wrote rational history and philosophy. So true it is that vitality in any mental function is favourable to vitality in the whole mind. Illusions incident to mythology are not dangerous in the end, because illusion finds in experience a natural though painful cure. . . .  A developed mythology shows that man has taken a deep and active interest both in the world and in himself, and has tried to link the two, and interpret the one by the other. Myth is therefore a natural prologue to philosophy, since the love of ideas is the root of both.

Modern critics of traditional religion are right to argue that we need to revise, reinterpret, or abandon myths when they conflict with new evidence.  As astronomy advanced, it was necessary to abandon the geocentric model of the universe.   As the evidence for evolution accumulated, it was no longer plausible to believe that the universe was created in the extremely short span of six days.  There is a difference between a belief formed in the face of a scarcity of evidence and a belief that goes against an abundance of evidence.  The former is permitted, and is even necessary to advance knowledge; the latter takes knowledge backward.

Today we have reached the point at which science is attempting to answer some very large questions, and science is running up against the limits of what is possible with observation, experimentation, and verification.  Increasingly, the scientific imagination is developing theories that are plausible, but have little or no evidence to back them up; in fact, for many of these theories we will probably never have sufficient evidence.  I am referring here to cosmological theories about the origins of the universe that propose a “multiverse,” that is, a large or even infinite collection of universes that exist alongside our own observable universe.

There are several different types of multiverse theories.  The first type, which many if not most cosmologists accept, proposes multiple universes with the same physical laws and constants as ours, but with different distributions of matter.  A second type, which is more controversial, proposes an infinite number of universes with different physical laws and constants.  A third type, also controversial, arises out of the “many worlds” interpretation of quantum physics — in this view, every time an indeterminate event occurs (say, a six-sided die comes up a “four”), an entirely new universe splits off from our own.  Thus, the most extreme multiverse theories claim that all possibilities exist in some universe, somewhere.  There are even an infinite number of people like you, each with a slight variation in life history (i.e., turning left instead of turning right when leaving the house this morning).

The problem with these theories, however, is that it is impossible to obtain solid evidence on the existence of other universes through observation — the universes either exist far beyond the limits of our observable universe, or they reside on a different branch of reality that we cannot reach.  Now it’s not unusual for a scientific theory to predict the existence of particles or forces or worlds that we cannot yet observe; historically, a number of such predictions have proved true when the particle or force or world was finally observed.  But many other predictions have not been proved true.  With the multiverse, it is unlikely that we will have definitive evidence one way or the other.  And a number of scientists have revolted at this development, arguing that cosmology at this level is no longer scientific.  According to physicist Paul Davies,

Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence it requires the same leap of faith.

Likewise, Freeman Dyson insists:

[T]he multiverse is philosophy and not science. Science is about facts that can be tested and mysteries that can be explored, and I see no way of testing hypotheses of the multiverse. Philosophy is about ideas that can be imagined and stories that can be told. I put narrow limits on science, but I recognize other sources of human wisdom going beyond science. Other sources of wisdom are literature, art, history, religion, and philosophy. The multiverse has its place in philosophy and in literature.

Cosmologist George F.R. Ellis, in the August 2011 issue of Scientific American, notes that there are several ways of indirectly testing for the existence of multiple universes, but none are likely to be definitive.  He concludes: “Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are.  But we should name it for what it is.”

Given the thinness of the evidence for extreme multiverse theories, one might ask why modern day atheists do not seem to attack and mock such theorists for believing in something for which they cannot provide solid evidence.  At the very least, Christopher Hitchens’s claim that “exceptional claims demand exceptional evidence” would seem to invalidate belief in any multiverse theory.  At best, at some future point we may have indirect or circumstantial evidence for the existence of some other universes; but we are never going to have exceptional evidence for an infinite number of universes consisting of all possibilities.  So why do we not hear of insulting analogies involving orbiting teapots and flying spaghetti monsters when some scientists propose an infinite number of universes based on different physical laws or an infinite number of versions of you?  I think it’s because scientists are respected authority figures in a modern, secular society.  If a scientist says there are multiple universes, we are inclined to believe them even in the absence of solid evidence, because scientists have social prestige, especially among atheists.

Ultimately, there is no solid evidence for the existence of God, no solid evidence for the existence of an infinite variety of universes, and no solid evidence for the existence of other versions of me.  Whether or not one chooses to believe any of these propositions depends on whether one decides to leap into the dark, and which direction one decides to leap.  This does not mean that any religious belief is permissible — on issues which have abundant evidence, beliefs cannot go against evidence.  Evolution has abundant evidence, as does modern medical science, chemistry, and rocket science.  But where evidence is scarce, and a variety of beliefs are compatible with existing evidence, holding a particular belief cannot be regarded as wholly unjustified and irrational.

 

The Role of Imagination in Science, Part 3

In previous posts (here and here), I argued that mathematics was a product of the human imagination, and that the test of mathematical creations was not how real they were but how useful or valuable they were.

Recently, Russian mathematician Edward Frenkel, in an interview in the Economist magazine, argued the contrary case.  According to Frenkel,

[M]athematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness.  We mathematicians discover them and are able to connect to this hidden reality through our consciousness.  If Leo Tolstoy had not lived we would never have known Anna Karenina.  There is no reason to believe that another author would have written that same novel.  However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem.

Dr. Frenkel goes on to note that mathematical concepts don’t always match to physical reality — Euclidean geometry represents an idealized three-dimensional flat space, whereas our actual universe has curved space.  Nevertheless, mathematical concepts must have an objective reality because “these concepts transcend any specific individual.”

One problem with this argument is the implicit assumption that the human imagination is wholly individualistic and arbitrary, and that if multiple people come up with the same idea, this must demonstrate that the idea exists objectively outside the human mind.  I don’t think this assumption is valid.  It’s perfectly possible for the same idea to be invented by multiple people independently.  Surely if Thomas Edison never lived, someone else would have invented the light bulb.   Does that mean that the light bulb is not a true creation of the imagination, that it was not invented but always existed “objectively” before Edison came along and “discovered” it?  I don’t think so.  Likewise with modern modes of ground transportation, air transportation, manufacturing technology, etc.  They’re all apt to be imagined and invented by multiple people working independently; it’s just that patent law only recognizes the first person to file.

It’s true that in other fields of human knowledge, such as literature, one is more likely to find creations that are truly unique.  Yes, Anna Karenina is not likely to be written by someone else in the absence of Tolstoy.  However, even in literature, there are themes that are universal; character names and specific plot developments may vary, but many stories are variations on the same theme.  Consider the following story: two characters from different social groups meet and fall in love; the two social groups are antagonistic toward each other and would disapprove of the love; the two lovers meet secretly, but are eventually discovered; one or both lovers die tragically.  Is this not the basic plot of multiple stories, plays, operas, and musicals going back two thousand years?

Dr. Frenkel does admit that not all mathematical concepts correspond to physical reality.  But if there is not a correspondence to something in physical reality, what does it mean to say that a mathematical concept exists objectively?  How do we prove something exists objectively if it is not in physical reality?

If one looks at the history of mathematics, there is an intriguing pattern in which the earliest mathematical symbols do indeed seem to point to or correspond to objects in physical reality; but as time went on and mathematics advanced, mathematical concepts became more and more creative and distant from physical reality.  These later mathematical concepts were controversial among mathematicians at first, but later became widely adopted, not because someone proved they existed, but because the concepts seemed to be useful in solving problems that could not be solved any other way.

The earliest mathematical concepts were the “natural numbers,” the numbers we use for counting (1, 2, 3 . . .).  Simple operations were derived from these natural numbers.  If I have two apples and add three apples, I end up with five apples.  However, the number zero was initially controversial — how can nothing be represented by something?  The ancient Greeks and Romans, for all of their impressive accomplishments, did not use zero, and the number zero was not adopted in Europe until the Middle Ages.

Negative numbers were also controversial at first.  How can one have “negative two apples” or a negative quantity of anything?  However, it became clear that negative numbers were indeed useful conceptually.  If I have zero apples and borrow two apples from a neighbor, according to my mental accounting book, I do indeed have “negative two apples,” because I owe two apples to my neighbor.  It is an accounting fiction, but it is a useful and valuable fiction.  Negative numbers were invented in ancient China and India, but were rejected by Western mathematicians and were not widely accepted in the West until the eighteenth century.

The set of numbers known as “imaginary numbers” was even more controversial, since it involved quantities which, when squared, result in a negative number.  Since no real number can be squared to produce a negative result, the imaginary numbers were initially derided.  However, imaginary numbers proved to be such a useful conceptual tool in solving certain problems, they gradually became accepted.   Imaginary numbers have been used to solve problems in electric current, quantum physics, and envisioning rotations in three dimensions.
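Imaginary numbers are so thoroughly domesticated today that they come built into ordinary programming languages. A small Python illustration (the rotation example at the end is my own, added for concreteness):

```python
import cmath

z = 1j                   # Python's notation for the imaginary unit i
print(z * z)             # (-1+0j): i squared is -1
print(cmath.sqrt(-1))    # 1j: the square root no real number can supply

# One practical use: multiplying by i rotates a point 90 degrees
# counterclockwise in the plane.
point = 3 + 4j           # the point (3, 4)
print(point * 1j)        # (-4+3j): rotated to (-4, 3)
```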

Professor Stephen Hawking has used imaginary numbers in his own work on understanding the origins of the universe, employing “imaginary time” in order to explore what it might be like for the universe to be finite in time and yet have no real boundary or “beginning.”  The potential value of such a theory in explaining the origins of the universe leads Professor Hawking to state the following:

This might suggest that the so-called imaginary time is really the real time, and that what we call real time is just a figment of our imaginations.  In real time, the universe has a beginning and an end at singularities that form a boundary to space-time and at which the laws of science break down.  But in imaginary time, there are no singularities or boundaries.  So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like.  But according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds.  So it is meaningless to ask: which is real, “real” or “imaginary” time?  It is simply a matter of which is the more useful description.  (A Brief History of Time, p. 144.)

If you have trouble understanding this passage, you are not alone.  I have a hard enough time understanding imaginary numbers, let alone imaginary time.  The main point that I wish to underline is that even the best theoretical physicists don’t bother trying to prove that their conceptual tools are objectively real; the only test of a conceptual tool is if it is useful.

As a final example, let us consider one of the most intriguing of imaginary mathematical objects, the “hypercube.”  A hypercube is a cube that extends into additional dimensions, beyond the three spatial dimensions of an ordinary cube.  (Time is usually referred to as the “fourth dimension,” but in this case we are dealing strictly with spatial dimensions.)  A hypercube can be imagined in four dimensions, five dimensions, eight dimensions, twelve dimensions — in fact, there is no limit to the number of dimensions a hypercube can have, though the hypercube gets increasingly complex and eventually impossible to visualize as the number of dimensions increases.
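Although a twelve-dimensional hypercube cannot be visualized, its properties can be computed exactly, using the standard formulas of 2^n vertices and n·2^(n-1) edges for an n-dimensional cube. A short Python sketch (the function name hypercube_counts is invented for illustration):

```python
def hypercube_counts(n: int) -> tuple[int, int]:
    """Return (vertices, edges) of an n-dimensional hypercube."""
    vertices = 2 ** n            # each coordinate is independently 0 or 1
    edges = n * 2 ** (n - 1)     # each vertex meets n edges; each edge has 2 ends
    return vertices, edges

for n in (2, 3, 4, 12):
    v, e = hypercube_counts(n)
    print(f"{n}-cube: {v} vertices, {e} edges")
# 2-cube (square):     4 vertices,     4 edges
# 3-cube (cube):       8 vertices,    12 edges
# 4-cube (tesseract): 16 vertices,    32 edges
# 12-cube:          4096 vertices, 24576 edges
```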

Does a hypercube correspond to anything in physical reality?  Probably not.  While there are theories in physics that posit five, eight, ten, or even twenty-six spatial dimensions, these theories also posit that the additional spatial dimensions beyond our third dimension are curved up in very, very small spaces.  How small?  A million million million million millionth of an inch, according to Stephen Hawking (A Brief History of Time, p. 179).  So as a practical matter, hypercubes could exist only on the most minute scale.  And that’s probably a good thing, as Stephen Hawking points out, because in a universe with four fully-sized spatial dimensions, gravitational forces would become so sensitive to minor disturbances that planetary systems, stars, and even atoms would fly apart or collapse (pp. 180-81).

Dr. Frenkel would admit that hypercubes may not correspond to anything in physical reality.  So how do hypercubes exist?  Note that there is no limit to how many dimensions a hypercube can have.  Does it make sense to say that the hypercube consisting of exactly 32,458 dimensions exists objectively out there somewhere, waiting for someone to discover it?   Or does it make more sense to argue that the hypercube is an invention of the human imagination, and can have as many dimensions as can be imagined?  I’m inclined to the latter view.

Many scientists insist that mathematical objects must exist out there somewhere because they’ve been taught that a good scientist must be objective and dedicate him or herself to the discovery of things that exist independently of the human mind.  But there are too many mathematical ideas that are clearly products of the human mind, and they are too useful to abandon merely because of that origin.

The Role of Imagination in Science, Part 2

In a previous posting, we examined the status of mathematical objects as creations of the human mind, not objectively existing entities.  We also discussed the fact that the science of geometry has expanded from a single system to a great many systems, with no single system being true.  So what prevents mathematics from falling into nihilism?

Many people seem to assume that if something is labeled as “imaginary,” it is essentially arbitrary or of no consequence, because it is not real.  If something is a “figment of imagination” or “exists only in your mind,” then it is of no value to scientific knowledge.  However, two considerations impose limits or restrictions on imagination that prevent descent into nihilism.

The first consideration is that even imaginary objects have properties that are real or unavoidable, once they are proposed.  In The Mathematical Experience, mathematics professors Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.”  This may be a difficult concept to grasp (it took me a long time to grasp it), but consider some simple examples:

Imagine a circle in your mind.  Got that?  Now imagine a circle in which the radius of the circle is greater than the circumference of the circle.  If you are imagining correctly, it can’t be done.  Whether or not you know that the circumference of a circle is equal to twice the radius times pi, you should know that the circumference of a circle is always going to be larger than the radius.

Now imagine a right triangle.  Can you imagine a right triangle with a hypotenuse that is shorter than either of the two other sides?  No, whether or not you know the Pythagorean theorem, it’s in the very nature of a right triangle to have a hypotenuse that is longer than either of the two remaining sides.  This is what we mean by “true facts about imaginary objects.”  Once you specify an imagined object with certain basic properties, other properties follow inevitably from those initial, basic properties.
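Written out, the right-triangle fact follows in one step from the Pythagorean theorem, for any legs a, b > 0:

```latex
c^2 = a^2 + b^2 > a^2 \;\Rightarrow\; c > a,
\qquad
c^2 = a^2 + b^2 > b^2 \;\Rightarrow\; c > b
```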

The second consideration that puts restrictions on the imagination is this: while it may be possible to invent an infinite number of mathematical objects, only a limited number of those objects are going to be of value.  What makes a mathematical object of value?  In fact, there are multiple criteria for valuing mathematical objects, some of which may conflict with each other.

The most important criterion for a mathematical object, according to scientists, is the ability to predict real-world phenomena.  Does a particular equation or model allow us to predict the motion of stars and planets; or the multiplication of life forms; or the growth of a national economy?  This ability to predict is a most powerful attribute of mathematics — without it, it is not likely that scientists would bother using mathematics at all.

Does the ability to predict real-world phenomena demonstrate that at least some mathematical objects, however imaginary, correspond to or model reality?  Yes — and no.  For in most cases it is possible to choose from a number of different mathematical models that are approximately equal in their ability to predict, and we are still compelled to refer to other criteria in choosing which mathematical object to use.  In fact, there are often tradeoffs among the criteria — often, no single mathematical object is best on all of them.

One of the most important criteria after predictive ability is simplicity.  Although it has been demonstrated that Euclidean geometry is not the only type of geometry, it is still widely used because it is the simplest.  In general, scientists like to begin with the simplest model first; if that model becomes inadequate in predicting real-world events, they modify the model or choose a new one.  There is no point in starting with an unnecessarily complex geometry, and when one’s model gets too complex, the chance of error increases significantly.  In fact, simplicity is regarded as an important aspect of mathematical beauty — a mathematical proof that is excessively long and complicated is considered ugly, while a simple proof that reaches its answer in a few steps is beautiful.
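To make “not the only type of geometry” concrete: on the surface of a sphere one can draw a triangle with three right angles, something impossible on Euclid’s plane.  A standard textbook example, sketched here in Python by way of illustration:

    # Euclidean rule: the angles of any triangle sum to exactly 180 degrees.
    euclidean_sum = 60.0 + 60.0 + 60.0

    # Spherical triangle with one corner at the north pole and two corners
    # on the equator, 90 degrees of longitude apart: every angle is 90.
    spherical_sum = 90.0 + 90.0 + 90.0

    print(euclidean_sum)  # 180.0
    print(spherical_sum)  # 270.0, perfectly consistent within its own geometry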

Another criterion for choosing one mathematical object over another is scope or comprehensiveness.  Does the mathematical object apply only in limited, specific circumstances?  Or does it apply broadly to phenomena, tying together multiple events under a single model?

There is also the criterion of fruitfulness.  Is the model going to provide many new research findings?  Or is it going to be limited to answering one or two questions, providing no basis for additional progress?

Ultimately, it’s impossible to get away from value judgments when evaluating mathematical objects.  Correspondence to reality cannot be the only value.  Why do we use the Hindu-Arabic numeral system today and not the Roman numeral system?  I don’t think it makes sense to say that the Hindu-Arabic system corresponds to reality more accurately than the Roman numeral system.  Rather, the Hindu-Arabic numeral system is easier to use for many calculations, and it is more powerful in obtaining useful results.  Likewise, a base 10 numeral system doesn’t correspond to reality more accurately than a base 2 numeral system — it’s just easier for humans to use base 10.  For computers, it is easier to use base 2.  A base 60 system, such as the ancient Babylonians used, is more difficult for many calculations than base 10, but it is more useful for measuring time and angles.  Why?  Because 60 has so many divisors (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60), it can express fractions of units more simply, which is why we continue to use a modified version of base 60 for measuring time, angles, and geographic coordinates to this day.
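A small Python comparison (my own, purely illustrative) shows why the divisors matter:

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
    print(divisors(10))  # [1, 2, 5, 10]

    # A third of an hour is exactly 20 minutes in base 60;
    # a third in decimal is the endless 0.3333...
    print(60 // 3)  # 20
    print(1 / 3)    # 0.3333333333333333

Neither system corresponds to reality better than the other; one is simply more convenient for the task at hand.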

What about mathematical objects that don’t predict real world events or appear to model anything in reality at all?  This is the realm of pure mathematics, and some mathematicians prefer this realm to the realm of applied mathematics.  Do we make fun of pure mathematicians for wasting time on purely imaginary objects?  No, pure mathematics is still a form of knowledge, and mathematicians still seek beauty in mathematics.

Ultimately, imaginative knowledge is not arbitrary or inconsequential; there are real limits even for the imagination.  There may be an infinite number of mathematical systems that can be imagined, but only a limited number will be good.  Likewise, there is an infinite variety of musical compositions, paintings, and novels that can be created by the imagination, but only a limited number will be good, and only a very small number will be truly superb.  So even the imagination has standards, and these standards apply as much to the sciences as to the arts.

The Role of Imagination in Science, Part 1

In Zen and the Art of Motorcycle Maintenance, author Robert Pirsig argues that the basic conceptual tools of science, such as the number system, the laws of physics, and the rules of logic, have no objective existence, but exist only in the human mind.  These conceptual tools were not “discovered” but created by the human imagination.  Nevertheless, we use these concepts and invent new ones because they are good — they help us to understand and cope with our environment.

As an example, Pirsig points to the uncertain status of the number “zero” in the history of western culture.  The ancient Greeks were divided on the question of whether zero was an actual number – how could nothing be represented by something? – and did not widely employ zero.  The Roman numeral system also excluded zero.  It was only in the Middle Ages that the West finally adopted the number zero, by accepting the Hindu-Arabic numeral system.  The ancient Greek and Roman civilizations did not neglect zero because they were blind or stupid.  When later generations adopted zero, it was not because they suddenly discovered that zero existed, but because they found the number useful.

In fact, while mathematics appears to be absolutely essential to progress in the sciences, mathematics itself continues to lack objective certitude, and the philosophy of mathematics is plagued by questions of foundations that have never been resolved.  If asked, the majority of mathematicians will argue that mathematical objects are real, that they exist in some unspecified eternal realm awaiting discovery by mathematicians; but if you follow up by asking how we know that this realm exists, how we can prove that mathematical objects exist as objective entities, mathematicians cannot provide an answer that is convincing even to their fellow mathematicians.  For many decades, according to mathematicians Philip J. Davis and Reuben Hersh, the brightest minds sought to provide a firm foundation for mathematical truth, only to see their efforts founder (“Foundations, Found and Lost,” in The Mathematical Experience).

In response to these failures, mathematicians divided into multiple camps.  While the majority of mathematicians still insisted that mathematical objects were real, the school of fictionalism claimed that all mathematical objects were fictional.  Nevertheless, the fictionalists argued that mathematics was a useful fiction, so it was worthwhile to continue studying mathematics.  The school of formalism described mathematics as a set of statements about the consequences of following certain rules of a game — one can create many “games,” and these games have different outcomes resulting from their different sets of rules, but the games may not be about anything real (a simple illustration follows below).  The school of finitism argued that only the natural numbers (i.e., the counting numbers 1, 2, 3, . . .) and numbers that can be derived from them are real; all other numbers are creations of the human mind.  Even if one dismisses these schools as representing only a minority, the fact that there is such stark disagreement among mathematicians about the foundations of mathematics is unsettling.
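One homely way to picture formalism’s “games” (my analogy, not the formalists’ own) is to compare ordinary arithmetic with clock arithmetic: two rule-sets giving two different, but equally consistent, answers to the same question.

    def add_ordinary(a, b):
        return a + b

    def add_clock(a, b, modulus=12):
        # Arithmetic on a 12-hour clock face: a different rule, a different game.
        return (a + b) % modulus

    print(add_ordinary(7, 7))  # 14
    print(add_clock(7, 7))     # 2, since seven hours after 7 o'clock is 2 o'clock

Neither answer is “wrong”; each follows inevitably from the rules of its own game.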

Ironically, as mathematical knowledge has increased over the years, so has uncertainty.  For many centuries, it was widely believed that Euclidean geometry was the most certain of all the sciences.  However, by the late nineteenth century, it was discovered that one could create different geometries that were just as valid as Euclidean geometry — in fact, it was possible to create an infinite number of valid geometries.  Instead of converging on a single, true geometry, mathematicians have seemingly gone off in all different directions.  So what prevents mathematics from falling into complete nihilism, in which every method is valid and there are no standards?  This is an issue we will address in a subsequent posting.