What Are “Mythos” and “Logos”?

The terms “mythos” and “logos” are used to describe the transition in ancient Greek thought from the stories of gods, goddesses, and heroes (mythos) to the gradual development of rational philosophy and logic (logos). The former is represented by the earliest Greek thinkers, such as Hesiod and Homer; the latter is represented by later thinkers called the “pre-Socratic philosophers” and then Socrates, Plato, and Aristotle. (See the book: From Myth to Reason? Studies in the Development of Greek Thought).

In the earliest, “mythos” stage of development, the Greeks saw events of the world as being caused by a multitude of clashing personalities — the “gods.” There were gods for natural phenomena such as the sun, the sea, thunder and lightning, and gods for human activities such as winemaking, war, and love. The primary mode of explanation of reality consisted of highly imaginative stories about these personalities. However, as time went on, Greek thinkers became critical of the old myths and proposed alternative explanations of natural phenomena based on observation and logical deduction. Under “logos,” the highly personalized worldview of the Greeks was transformed into one in which natural phenomena were explained not by invisible superhuman persons, but by impersonal natural causes.

However, many scholars argue that there was not such a sharp distinction between mythos and logos historically, that logos grew out of mythos, and that elements of mythos remain with us today.

For example, ancient myths provided the first basic concepts used subsequently to develop theories of the origins of the universe. We take for granted the words that we use every day, but the vast majority of human beings never invent a single word or original concept in their lives — they learn these things from their culture, which is the end-product of thousands of years of speaking and writing by millions of people long dead. The very first concepts of “cosmos,” “beginning,” “nothingness,” and differentiation from a single substance — these were not present in human culture for all time, but originated in ancient myths. Subsequent philosophers borrowed these concepts from the myths, while discarding the overly personalistic interpretations of the origins of the universe. In that sense, mythos provided the scaffolding for the growth of philosophy and modern science. (See Walter Burkert, “The Logic of Cosmogony” in From Myth to Reason? Studies in the Development of Greek Thought.)

An additional issue is that not all myths are wholly false. Many myths are stories that communicate truths even if the characters and events in the story are fictional. Socrates and Plato denounced many of the early myths of the Greeks, but they also illustrated philosophical points with stories that were meant to serve as analogies or metaphors. Plato’s allegory of the cave, for example, is meant to illustrate the ability of the educated human to perceive the true reality behind surface impressions. Could Plato have made the same philosophical point in literal language, without using any stories or analogies? Possibly, but the impact would have been weaker, and the point might not have been effectively communicated at all.

Some of the truths that myths communicate are about human values, and these values can be true even if the stories in which the values are embedded are false. Ancient Greek religion contained many preposterous stories, and the notion of personal divine beings directing natural phenomena and intervening in human affairs was false. But when the Greeks built temples and offered sacrifices, they were not just worshiping personalities — they were worshiping the values that the gods represented. Apollo was the god of light, knowledge, and healing; Hera was the goddess of marriage and family; Aphrodite was the goddess of love; Athena was the goddess of wisdom; and Zeus, the king of the gods, upheld order and justice. There’s no evidence at all that these personalities existed or that sacrifices to these personalities would advance the values they represented. But a basic respect for and worshipful disposition toward the values the gods represented was part of the foundation of ancient Greek civilization. I don’t think it was a coincidence that the city of Athens, whose patron goddess was Athena, went on to produce some of the greatest philosophers the world has seen — love of wisdom is the prerequisite for knowledge, and that love of wisdom grew out of the culture of Athens. (The ancient Greek word philosophia literally means “love of wisdom.”)

It is also worth pointing out that worship of the gods, for all of its superstitious aspects, was not incompatible even with the growth of scientific knowledge. Modern western medicine originated in the healing temples devoted to the god Asclepius, the son of Apollo and the god of medicine. Both of the great ancient physicians Hippocrates and Galen are reported to have begun their careers in the temples of Asclepius, which served as the first hospitals. Hippocrates is widely regarded as the father of western medicine, and Galen is considered the most accomplished medical researcher of the ancient world. As love of wisdom was the prerequisite for philosophy, reverence for healing was the prerequisite for the development of medicine.

Karen Armstrong has written that ancient myths were never meant to be taken literally, but were “metaphorical attempts to describe a reality that was too complex and elusive to express in any other way.” (A History of God) I am not sure that’s completely accurate. I think it most likely that the mass of humanity believed in the literal truth of the myths, while educated human beings understood the gods to be metaphorical representations of the good that existed in nature and humanity. Some would argue that this use of metaphors to describe reality is deceptive and unnecessary. But a literal understanding of reality is not always possible, and metaphors are widely used even by scientists.

Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book, Making Truth: Metaphor in Science. According to Brown, the history of the human understanding of the atom, which cannot be directly seen, began with a simple metaphor of atoms as billiard balls; later, scientists compared atoms to plum pudding; then they compared the atom to our solar system, with electrons “orbiting” around a nucleus. There has been a gradual improvement in our models of the atom over time, but ultimately, there is no single, correct literal representation of the atom. Each model illustrates an aspect or aspects of atomic behavior — no one model can capture all aspects accurately. Even the notion of atoms as particles is not fully accurate, because atoms can behave like waves, without the precise position in space that we normally think of particles as having. The same principle applies to models of the molecule as well. (Brown, chapters 4-6) A number of scientists have compared the imaginative construction of scientific models to map-making — there is no single, fully accurate way to map the earth (using a flat surface to depict a sphere), so we are forced to use a variety of maps at different scales and projections, depending on our needs.

Sometimes the visual models that scientists create are quite unrealistic. The model of the “energy landscape” was created by biologists in order to understand the process of protein folding — the basic idea is to imagine a ball rolling on a surface pitted with holes and valleys of varying depth. Just as the ball tends to seek out the low points on the landscape (due to gravity), proteins tend to seek the lowest possible free-energy state. All biologists know the energy landscape model is a metaphor — in reality, proteins don’t actually go rolling down hills! But the model is useful for understanding a process that is highly complex and cannot be directly seen.
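To make the metaphor concrete, here is a minimal sketch in Python (the landscape function, starting point, and step size are invented purely for illustration and have nothing to do with real protein biophysics): a point repeatedly takes a small downhill step until it settles in a valley, just as the ball in the metaphor settles into a low point and the protein settles into a low free-energy state.

    # Toy "energy landscape": a bumpy curve with two valleys.
    def energy(x):
        return (x**2 - 1)**2 + 0.3 * x

    # Estimate the local slope numerically.
    def slope(x, h=1e-5):
        return (energy(x + h) - energy(x - h)) / (2 * h)

    x = 1.8                      # arbitrary starting position on the landscape
    for _ in range(1000):
        x -= 0.01 * slope(x)     # take a small step downhill

    print(f"settled near x = {x:.3f}, energy = {energy(x):.3f}")

The point always ends up in the nearest valley, which may or may not be the deepest one; this echoes how biologists talk about proteins getting trapped in misfolded states.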

What is particularly interesting is that some of the metaphorical models of science are frankly anthropomorphic — they are based on qualities or phenomena found in persons or personal institutions. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly; proteins that don’t fold correctly are either treated or dismantled so that they do not cause damage to the larger organism — a process that has been given a medical metaphor: “protein triage.” (Brown, chapters 7-8) Even referring to the “laws of physics” is to use a metaphorical comparison to human law. So even as logos has triumphed over the mythos conception that divine personalities rule natural phenomena, qualities associated with personal beings have continued to sneak into modern scientific models.

The transition from a mythos-dominated worldview to a logos-dominated worldview was a stupendous achievement of the ancient Greeks, and modern philosophy, science, and civilization would not be possible without it. But the transition did not involve a complete replacement of one worldview with another; rather, it involved the building of additional useful structures on top of a simple foundation. Logos grew out of its origins in mythos, and retains elements of mythos to this day. The compatibilities and conflicts between these two modes of thought are the thematic basis of this website.

Related: A Defense of the Ancient Greek Pagan Religion

Einstein’s Judeo-Quaker Pantheism

I recently came across a fascinating website, Einstein: Science and Religion, which I hope you will find time to peruse.  The website, edited by Arnold Lesikar, Professor Emeritus in the Department of Physics, Astronomy, and Engineering Science at St. Cloud State University in Minnesota, contains a collection of Einstein’s various comments on religion, God, and the relationship between science and religion.

Einstein’s views on religion have been frequently publicized and commented on, but it is difficult to get an accurate and comprehensive assessment of Einstein’s actual views on religion because of the tendency of both believers and atheists to cherry-pick particular quotations or to quote out of context. Einstein’s actual views on religion are complex and multifaceted, and one is apt to get the wrong impression by focusing on just one or several of Einstein’s comments.

One should begin by noting that Einstein did not accept the notion of a personal God, an omnipotent superbeing who listens to our prayers and intervenes in the operations of the laws of the universe. Einstein repeatedly rejected this notion of God throughout his life, from his adolescence to old age. He also believed that many, if not most, of the stories in the Bible were untrue.

The God Einstein did believe in was the God of the philosopher Spinoza. Spinoza conceived of God as being nothing more than the natural order underlying this universe — this order was fundamentally an intelligent order, but it was a mistake to conceive of God as having a personality or caring about man. Spinoza’s view was known as pantheism, and Einstein explicitly stated that he was a proponent of Spinoza and of pantheism. Einstein also argued that ethical systems were a purely human concern, with no superhuman authority figure behind them, and there was no afterlife in which humans could be rewarded or punished. In fact, Einstein believed that immortality was undesirable anyway. Finally, Einstein sometimes expressed derogatory views of religious institutions and leaders, believing them responsible for superstition and bigotry among the masses.

However, it should also be noted that Einstein’s skepticism and love of truth were too deep to result in a rigid and dogmatic atheism. Einstein described himself variously as an agnostic or pantheist and disliked the arrogant certainty of atheists. He even refused to definitively reject the idea of a personal God, believing that there were too many mysteries behind the universe to come to any final conclusions about God. He also wrote that he did not want to destroy the idea of a personal God in the minds of the masses, because even a primitive metaphysics was better than no metaphysics at all.

Even while rejecting the notion of a personal God, Einstein described God as a spirit, a spirit with the attribute of thought or intelligence: “[E]very one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe — a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.” In an interview, Einstein expressed a similar view:

If there is any such concept as a God, it is a subtle spirit, not an image of a man that so many have fixed in their minds. In essence, my religion consists of a humble admiration for this illimitable superior spirit that reveals itself in the slight details that we are able to perceive with our frail and feeble minds.

Distinguishing between the religious feeling of the “naïve man” and the religious feeling of the scientist, Einstein argued:  “[The scientist’s] religious feeling takes the form of a rapturous amazement at the harmony of natural law, which reveals an intelligence of such superiority that, compared with it, all the systematic thinking and acting of human beings is an utterly insignificant reflection.”

While skeptical and often critical of religious institutions, Einstein also believed that religion played a valuable and necessary role for civilization in creating “superpersonal goals” for human beings, goals above and beyond self-interest, that could not be established by pure reason.  Reason could provide us with the facts of existence, said Einstein, but the question of how we should live our lives necessarily required going beyond reason. According to Einstein:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly.

Einstein even argued that the establishment of moral goals by religious prophets was one of the most important accomplishments of humanity, eclipsing even scientific accomplishment:

Our time is distinguished by wonderful achievements in the fields of scientific understanding and the technical application of those insights. Who would not be cheered by this? But let us not forget that knowledge and skills alone cannot lead humanity to a happy and dignified life. Humanity has every reason to place the proclaimers of high moral standards and values above the discoverers of objective truth. What humanity owes to personalities like Buddha, Moses, and Jesus ranks for me higher than all the achievements of the enquiring and constructive mind.

Einstein’s views of Jesus are particularly intriguing. Einstein never rejected his Jewish identity and refused all attempts by others to convert him to Christianity. Einstein also refused to believe the stories of Jesus’s alleged supernatural powers. But Einstein also believed the historical existence of Jesus was a fact, and Einstein regarded Jesus as one of the greatest — if not the greatest — of religious prophets:

As a child, I received instruction both in the Bible and in the Talmud. I am a Jew, but I am enthralled by the luminous figure of the Nazarene. . . . No one can read the Gospels without feeling the actual presence of Jesus. His personality pulsates in every word. No myth is filled with such life. How different, for instance, is the impression which we receive from an account of legendary heroes of antiquity like Theseus. Theseus and other heroes of his type lack the authentic vitality of Jesus. . . . No man can deny the fact that Jesus existed, nor that his sayings are beautiful. Even if some of them have been said before, no one has expressed them so divinely as he.

Toward the end of his life, Einstein, while remaining Jewish, expressed great admiration for the Christian sect known as the Quakers. Einstein stated that the “Society of Friends,” as the Quakers referred to themselves, had the “highest moral standards” and their influence was “very beneficial.” Einstein even declared, “If I were not a Jew I would be a Quaker.”

Now Einstein’s various pronouncements on religion are scattered in multiple sources, so it is not surprising that people may get the wrong impression from examining just a few quotes. Sometimes stories of Einstein’s religious views are simply made up, implying that Einstein was a traditional believer. Other times, atheists will emphasize Einstein’s rejection of a personal God, while completely overlooking Einstein’s views on the limits of reason, the necessity of religion in providing superpersonal goals, and the value of the religious prophets.

For some people, a religion without a personal God is not a true religion. But historically, a number of major religions do not hold belief in a personal God as central to their belief system, including Taoism, Buddhism, and Confucianism. In addition, many theologians in monotheistic faiths describe God in impersonal terms, or stress that the attributes of God may be represented symbolically as personal, but that God himself cannot be adequately described as a person. The great Jewish theologian Maimonides argued that although God had been described allegorically and imperfectly by the prophets as having the attributes of a personal being, God did not actually have human thoughts and emotions. The twentieth century Christian theologian Paul Tillich argued that God was not “a being” but the “Ground of Being” or the “Power of Being” existing in all things.

However, it is somewhat odd that while rejecting the notion of a personal God, Einstein saw God as a spirit that seemingly possessed an intelligence far greater than that of human beings. In that, Einstein was similar to Spinoza, who believed God had the attribute of “thought” and that the human mind was but part of the “infinite intellect of God.”  But is not intelligence a quality of personal beings? In everyday life, we don’t think of orbiting planets or stars or rocks or water as possessing intelligence, and even if we attribute intelligence to lower forms of life such as bacteria and plants, we recognize that this sort of intelligence is primitive. If you ask people what concrete, existing things best possess the quality of intelligence, they will point to humans — personal beings! Yet both Spinoza and Einstein attribute vast, or even infinite, intelligence to God, while denying that God is a personal being!

I am not arguing that Spinoza and Einstein were wrong or somehow deluding themselves when they argued that God was not a personal being. I am simply pointing out how difficult it is to adequately and accurately describe God. I think Spinoza and Einstein were correct in seeking to modify the traditional concept of God as a type of omnipotent superperson with human thoughts and emotions. But at the same time, it can be difficult to describe God in a way that does not use attributes that are commonly thought of as belonging to personal beings. At best, we can use analogies from everyday experience to indirectly describe God, while acknowledging that all analogies fall short.

 

What Are the Laws of Nature? – Part Two

In a previous post, I discussed the mysterious status of the “laws of nature,” pointing out that these laws seem to be eternal, omnipresent, and enormously powerful in shaping the universe, although they have no mass and no energy.

There is, however, an alternative view of the laws of nature proposed by thinkers such as Ronald Giere and Nancy Cartwright, among others. In this view, it is a fallacy to suppose that the laws of nature exist as objectively real entities — rather, what we call the laws of nature are simplified models that the human mind creates to explain and predict the operations of the universe. The laws were created by human beings to organize information about the cosmos. As such, the laws are not fully accurate descriptions of how the universe actually works, but generalizations; and like nearly all generalizations, there are numerous exceptions when the laws are applied to particular circumstances. We retain the generalizations because they excel at organizing and summarizing vast amounts of information, but we should never make the mistake of assuming that the generalizations are real entities. (See Science Without Laws and How the Laws of Physics Lie.)

Consider one of the most famous laws of nature, Isaac Newton’s law of universal gravitation. According to this law, the gravitational relationship between any two bodies in the universe is determined by the size (mass) of the two bodies and their distance from each other. More specifically, any two bodies in the universe attract each other with a force that is (1) directly proportional to the product of their masses and (2) inversely proportional to the square of the distance between them.  The equation is quite simple:

F = G \frac{m_1 m_2}{r^2}

where F is the force between the two masses, G is the gravitational constant, m1 and m2 are the masses of the two bodies, and r is the distance between the centers of the two bodies.
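As a concrete illustration, here is a minimal Python sketch that plugs standard textbook values for the Earth-moon system into the formula (the numbers are common reference values, not figures from Newton):

    # F = G * m1 * m2 / r^2
    G = 6.674e-11        # gravitational constant, N·m²/kg²
    m_earth = 5.972e24   # mass of the Earth, kg
    m_moon = 7.348e22    # mass of the moon, kg
    r = 3.844e8          # mean Earth-moon distance, m

    F = G * m_earth * m_moon / r**2
    print(f"Earth-moon gravitational force: {F:.2e} N")   # roughly 2 x 10^20 newtons

Change either mass or the distance and the force scales exactly as the formula says: double one mass and the force doubles; double the distance and the force drops to a quarter.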

Newton’s law was quite valuable in helping predict the motions of the planets in our solar system, but in some cases the formula did not quite match astronomical observations. The orbit of the planet Mercury in particular never fit Newton’s law, no matter how much astronomers tried to fiddle with the law to get the right results. It was only when Einstein introduced his theory of relativity that astronomers could correctly predict the motions of all the planets, including Mercury. Why did Einstein’s theory work better for Mercury? Because as the planet closest to the sun, Mercury is most affected by the sun’s massive gravitation, and Newton’s law becomes less accurate under conditions of strong gravitation.

Einstein’s equations for gravity are known as the “field equations,” and although they are better at predicting the motions of the planets, they are extremely complex — too complex really for many situations. In fact, physicist Stephen Hawking has noted that scientists still often use Newton’s law of gravity because it is much simpler and a good enough approximation in most cases.
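For reference, Einstein’s field equations can be written compactly as

G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

but the compact notation is deceptive: written out, this single line is a system of ten coupled, nonlinear partial differential equations for the geometry of space-time, which is why the far simpler Newtonian formula above remains the working tool whenever gravity is weak.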

So what does this imply about the reality of Newton’s law of universal gravitation? Does Newton’s law float around in space or in some transcendent realm directing the motions of the planets, until the gravitation becomes too large, and then it hands off its duties to the Einstein field equations? No, of course not. Newton’s law is an approximation that works for many, but not all cases. Physicists use it because it is simple and “good enough” for most purposes. When the approximations become less and less accurate, a physicist may switch to the Einstein field equations, but this is a human value judgment, not the voice of nature making a decision to switch equations.

One other fact is worth noting: in Newton’s theory, gravity is a force between two bodies. In Einstein’s theory, gravity is not a real force — what we call a gravitational force is simply how we perceive the distortion of the space-time fabric caused by massive objects. Physicists today refer to gravity as a “fictitious force.” So why do professors of physics continue to use Newton’s law and teach this “fictitious force” law to their students? Because it is simpler to use and still a good enough approximation for most cases. Newton’s law can’t possibly be objectively real — if it is, Einstein is wrong.

The school of thought known as “scientific realism” would dispute these claims, arguing that even if the laws of nature as we know them are approximations, there are still real, objective laws underneath these approximations, and as science progresses, we are getting closer and closer to knowing what these laws really are. In addition, they argue that it would be absurd to suppose that we can possibly make progress in technology unless we are getting better and better in knowing what the true laws are really like.

The response of Ronald Giere and Nancy Cartwright to the realists is as follows: it’s a mistake to assume that if our laws are approximations and our approximations are getting better and better that therefore there must be real laws underneath. What if nature is inherently so complex in its causal variables and sequences that there is no objectively real law underneath it all? Nancy Cartwright notes that engineers who must build and maintain technological devices never apply the “laws of nature” directly to their work without a great deal of tinkering and modifications to get their mental models to match the specific details of their device. The final blueprint that engineers may create is a highly specific and highly complex model that is a precise match for the device, but of very limited generalizability to the universe as a whole. In other words, there is an inherent and unavoidable tradeoff between explanatory power and accuracy. The laws of nature are valued by us because they have very high explanatory power, but specific circumstances are always going to involve a mix of causal forces that refute the predictions of the general law. In order to understand how two bodies behave, you not only need to know gravity, you need to know the electric charge of the two bodies, the nuclear force, any chemical forces, the temperature, the speed of the objects, and additional factors, some of which can never be calculated precisely. According to Cartwright,

. . . theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behavior is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all. . . . God may have written just a few laws and grown tired. We do not know whether we are living in a tidy universe or an untidy one. (How the Laws of Physics Lie, p. 49)

Cartwright makes it clear that she believes in causal powers in nature — it’s just that causal powers are not the same as laws, which are simply general principles for organizing information.

Some philosophers and scientists would go even further. They argue that science is able to develop and improve models for predicting phenomena, but the underlying nature of reality cannot be grasped directly, even if our models are quite excellent at predicting. This is because there are always going to be aspects of nature that are non-observable and there are often multiple theories that can explain the same phenomenon. This school of thought is known as instrumentalism.

Stephen Hawking appears to be sympathetic to such a view. In a discussion of his use of “imaginary time” to model how the universe developed, Hawking stated “a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds. So it is meaningless to ask: which is real, “real” or “imaginary” time? It is simply a matter of which is the more useful description.” (A Brief History of Time, p. 144) In a later essay, Hawking made the case for what he calls “model-dependent realism.” He argues:

it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither one can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration. . . . Each theory may have its own version of reality, but according to model-dependent realism, that diversity is acceptable, and none of the versions can be said to be more real than any other.

Hawking concludes that given these facts, it may well be impossible to develop a unified theory of everything, that we may have to settle for a diversity of models. (It’s not clear to me how Hawking’s “model-dependent realism” differs from instrumentalism, since they seem to share many aspects.)

Intuitively, we are apt to conclude that our progress in technology is proof enough that we are understanding reality better and better, getting closer and closer to the Truth. But it’s actually quite possible for science to develop better and better predictive models while still retaining very serious doubts and disputes about many fundamental aspects of reality. Among physicists and cosmologists today, there is still disagreement on the following issues: are there really such things as subatomic particles, or are these entities actually fields, or something else entirely?; is the flow of time an illusion, or is time the chief fundamental reality?; are there an infinite number of universes in a wider multiverse, with infinite versions of you, or is this multiverse theory a mistaken interpretation of uncertainty at the quantum level?; are the constants of the universe really constant, or do they sometimes change?; are mathematical objects themselves the ultimate reality, or do they exist only in the mind? A number of philosophers of science have concluded that science does indeed progress by creating more and better models for predicting, but they make an analogy to evolution: life forms may be advancing and improving, but that doesn’t mean they are getting closer and closer to some final goal.

Referring back to my previous post, I discussed the view that the “laws of nature” appear to exist everywhere and have the awesome power to shape the universe and direct the motions of the stars and planets, despite the fact that the laws themselves have no matter and no energy. But if the laws of nature are creations of our minds, what then? I can’t prove that there are no real laws behind the mental models that we create. It seems likely that there must be some such laws, but perhaps they are so complex that the best we can do is create simplified models of them. Or perhaps we must acknowledge that the precise nature of the cosmological order is mysterious, and any attempt to understand and describe this order must use a variety of concepts, analogies, and stories created by our minds. Some of these concepts, analogies, and stories are clearly better than others, but we will never find one mental model that is a perfect fit for all aspects of reality.

What Are the Laws of Nature?

According to modern science, the universe is governed by laws, and it is the job of scientists to discover those laws. However, the question of where these laws come from, and what their precise nature is, remains mysterious.

If laws are all that are needed to explain the origins of the universe, the laws must somehow have existed prior to the universe, that is, eternally. But this raises some puzzling issues. Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed?  Does it make sense to speak of the law of conservation of mass existing before mass existed? For that matter, does it make sense to speak of Mendel’s laws of genetics existing before there was DNA, before there were nucleotides to make up DNA, before there were even atoms of carbon and nitrogen to make up nucleotides? It took the universe somewhere between 150 million and 1 billion years to create the first heavy elements, including atoms of carbon and nitrogen. Were Mendel’s laws of genetics sitting around impatiently that whole time waiting for something to happen? Or does it make sense to think of laws evolving with the universe, in which case we still have a chicken-and-egg question — did evolving laws precede the creation of material forms or did evolving material forms precede the laws?

Furthermore, where do the laws of nature exist? Do they exist in some other-worldly Platonic realm beyond time and space? Many, if not most, mathematicians and physicists are inclined to believe that mathematical equations run the universe, and these equations exist objectively. But if laws/equations govern the operations of the universe, they must exist everywhere, even though we can’t sense them directly at all. Why? Because, according to Einstein, information cannot travel instantaneously across large distances – in fact, information cannot travel faster than the speed of light. Now, the radius of the observable universe is about 46 billion light-years, so if we imagine the laws of nature floating around in space at the center of the universe, it would take at least 46 billion years for the commands issued by the laws of nature to reach the edge of the universe — much too slow. Even within our tiny solar system, it takes a little over 8 minutes for light from the sun to reach the earth, so information flow across even that small distance would involve a significant time lag. However, our astronomical observations indicate no lag time — the effect of the laws is instantaneous, indicating that the laws must exist everywhere — in other words, the laws of nature have the property of omnipresence.
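The 8-minute figure is simple arithmetic: travel time is just distance divided by the speed of light. A minimal check in Python, using standard values for the Earth-sun distance and the speed of light:

    AU = 1.496e11   # mean Earth-sun distance in meters (1 astronomical unit)
    c = 2.998e8     # speed of light in meters per second

    seconds = AU / c
    print(f"sun to Earth: {seconds:.0f} seconds, about {seconds / 60:.1f} minutes")   # ~8.3 minutes

On the scale of the observable universe the same arithmetic gives a delay of billions of years, which is the point of the thought experiment above.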

What sort of power do the laws of nature have? Since they direct the operations of the universe, they must have immense power. Either they have the capability to directly shape and move stars, planets, and entire galaxies, or they simply issue commands that stars, planets, and galaxies follow. In either case, should not this power be detectable as a form of energy? And if it is a form of energy, shouldn’t this energy have the potential to be converted into matter, according to the principle of mass-energy equivalence? In that case, the laws of nature should, in principle, be observable as energy or mass. But the laws of nature appear to have no detectable energy and no detectable mass.

Finally, there is the question of the fundamental unity of the laws of nature, and where that unity comes from. A mere collection of unconnected laws does not necessarily bring about order. Laws have to be integrated in a harmonious fashion so that they establish a foundation of order and allow the evolution of increasingly complex forms, from hydrogen atoms to heavier atomic elements to molecules to DNA to complex life forms to intelligent life forms. The fact of the matter is that it does not take much variation in the values of certain physical constants to cause a collapse of the universe or the development of a universe that is incapable of supporting life. According to physicist Paul Davies:

There are endless ways in which the universe might have been totally chaotic. It might have had no laws at all, or merely an incoherent jumble of laws that caused matter to behave in disorderly or unstable ways. . . . the various forces of nature are not just a haphazard conjunction of disparate influences. They dovetail together in a mutually supportive way which bestows upon nature stability and harmony. . .  (The Mind of God: The Scientific Basis for a Rational World, pp. 195-96)

There is a counterargument to this claim of essential unity in the laws of nature: according to theories of the multiverse, new universes are constantly being created with different physical laws and parameters — we just happen to live in a universe that supports life because only a universe that supports life can have observers who speculate about the orderliness of the universe! However, multiverse theories have been widely criticized for being non-falsifiable, since we can’t directly observe other universes.

So, if we are to believe the findings of modern science, the laws of nature have the following characteristics:

  1. They have existed eternally, prior to everything.
  2. They are omnipresent – they exist everywhere.
  3. They are extremely powerful, though they have no energy and no mass.
  4. They are unified and integrated in such a way as to allow the development of complex forms, such as life (at least in this universe, the only universe we can directly observe).

Are these not the characteristics of a universal spirit? Moreover, is not this spirit by definition supernatural, i.e., existing above nature and responsible for the operations of nature?

Please note that I am not arguing here that the laws of nature prove the existence of a personal God who is able to shape, reshape, and interfere with the laws of nature anytime He wishes. I think that modern science has more than adequately demonstrated that the idea of a personal being who listens to our prayers and temporarily suspends or adjusts the laws of nature in response to our prayers or sins is largely incompatible with the evidence we have accumulated over hundreds of years. Earthquakes happen because of shifting tectonic plates, not because certain cities have committed great evils. Disease happens because viruses and bacteria mutate, reproduce, and spread, not because certain people deserve disease. And despite the legend of Moses saving the Jews by parting the Red Sea and then destroying the Pharaoh’s army, God did not send a tsunami to wipe out the Nazis — the armies of the Allied Forces had to do that.

What I am arguing is that if you look closely at what modern science claims about the laws of nature, there is not much that separates these laws from the concept of a universal spirit, even if this spirit is not equivalent to an omnipotent, personal God.

The chief objection to the idea of the laws of nature as a universal spirit is that the laws of nature have the characteristics of mindless regularity and determinism, which are not the characteristics we think of when we think of a spirit. But consider this: the laws of nature do not in fact dictate invariable regularities in all domains, but in fact allow scope for indeterminacy, freedom, and creativity.

Consider activity at the subatomic level. Scientists have studied the behavior of subatomic particles for many decades, and they have discovered laws of behavior for those particles, but the laws are probabilistic, not deterministic. Physicist Richard Feynman, who won a Nobel Prize for his work on the physics of subatomic particles, described the odd world of subatomic behavior as follows: “The electron does whatever it likes.” It travels through space and time in all possible ways, and can even travel backward in time! Feynman was able to offer guidance on how to predict the future location of an electron, but only in terms of a probability based on calculating all the possible paths that the electron could choose.

This freedom on the subatomic level manifests itself in behavior on the atomic level, particularly in the element known as carbon. As Robert Pirsig notes:

One physical characteristic that makes carbon unique is that it is the lightest and most active of the group IV atoms whose chemical bonding characteristics are ambiguous. Usually the positively valenced metals in groups I through III combine chemically with negatively valenced nonmetals in groups V through VII and not with other members of their own group. But the group containing carbon is halfway between the metals and nonmetals, so that sometimes carbon combines with metals and sometimes with nonmetals and sometimes it just sits there and doesn’t combine with anything, and sometimes it combines with itself in long chains and branched trees and rings. . . . this ambiguity of carbon’s bonding preferences was the situation the weak Dynamic subatomic forces needed. Carbon bonding was a balanced mechanism they could take over. It was a vehicle they could steer to all sorts of freedom by selecting first one bonding preference and then another in an almost unlimited variety of ways. . . . Today there are more than two million known compounds of carbon, roughly twenty times as many as all the other known chemical compounds in the world. The chemistry of life is the chemistry of carbon. What distinguishes all the species of plants and animals is, in the final analysis, differences in the way carbon atoms choose to bond. (Lila, p. 168.)

And the life forms constructed by carbon atoms have the most freedom of all — which is why there are few invariable laws in biology that allow predictions as accurate as the predictions of physical systems. A biologist will never be able to predict the motion and destiny of a life form in the same way an astrophysicist can predict the motion of the planets in a solar system.

If you think about the nature of the universal order, regularity and determinism are precisely what is needed on the largest scale (stars, planets, and galaxies), with spontaneity and freedom restricted to the smaller scale of the subatomic/atomic and biological. If stars and planets were as variable and unpredictable as subatomic particles and life forms, there would be no stable solar systems, and no way for life to develop. Regularity and determinism on the large scale provide the stable foundation and firm boundaries needed for freedom, variety, and experimentation on the small scale. In this conception, universal spirit contains the laws of nature, but also has a freedom that goes beyond the laws.

However, it should be noted that there is another view of the laws of nature. In this view, the laws of nature do not have any existence outside of the human mind — they are simply approximate models of the cosmic order that human minds create to understand that order. This view will be discussed in a subsequent post.

What Are the Laws of Nature? – Part Two

Religion as a Source of Evil

That religious individuals and institutions have committed great evils in the past is a fact not disputed by most intelligent persons with a good understanding of history.  What is disputed is the question of how much evil in history religion has actually been responsible for, and how to weigh that evil against the good that religion has done.

A number of contemporary atheist authors such as Sam Harris and Christopher Hitchens focus intensely, even obsessively, on the evils committed by religion.  The message of their books is that not only is religion mostly evil, but that most of the evils committed by human beings historically can be attributed to religion and, more broadly, to a deficiency of reason.  They point to the role of religion in slavery, massacre, torture, ethnic conflict and genocide, racism, and antisemitism.  In response to the argument that secular regimes under the Nazis and Communists have also been responsible for these same evils, Harris and Hitchens point to the willing collaboration of religious authorities and institutions with the Nazis.  Both authors also argue that secular dictatorships suffered from a deficiency of reason similar to that of religious faith.  A greater commitment to reason and to evidence as the basis for belief, in their view, would do much to end evils committed by both religious and secular movements and regimes.

There is a good deal of truth to these arguments.  The world would be much improved if superstitions and incorrect beliefs about other human beings, ethnic groups, and societies could be eliminated.  But ultimately Harris and Hitchens do not seem to understand, or even take interest in, the deeper causes of evil in human beings.

The problem with viewing evil as being simply an outcome of irrationality is that it overlooks the powerful tendency of reason itself to be a tool of self-interest and self-aggrandizement.  Human beings commit evil not so much because they are irrational, but because they use reason to pursue and justify their desires.  It is the inherent self-centeredness of human beings that is the source of evil, not the belief systems that enable the pursuit and justification of self-interest.  Individual and group desires for wealth, power, influence, fame, prestige, and the fear of defeat and shame — these are the causes of social conflict, violence, and oppression.

Harris and Hitchens point to Biblical sanctions for slavery, and imply that slavery would not have existed if it were not for religion.  But is it not the case that slavery was ultimately rooted in the human desire for a life of wealth and ease, and that one path to such a life in the pre-industrial era was to force others to work for oneself?  Is it not also the case that human conflicts over land were (and are) rooted in the same desire for wealth, and that violent conflicts over social organization have been rooted in clashing visions of who is to hold power?  Religion has been implicated in slavery and social conflicts, but religion has not been their main cause.

It is worth quoting James Madison on the perennial problem of oppression and violence:

 As long as the reason of man continues [to be] fallible, and he is at liberty to exercise it, different opinions will be formed. As long as the connection subsists between his reason and his self-love, his opinions and his passions will have a reciprocal influence on each other; and the former will be objects to which the latter will attach themselves. . . .

The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. A zeal for different opinions concerning religion, concerning Government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for preëminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other, than to coöperate for their common good. So strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions, and excite their most violent conflicts.  (Federalist, No. 10)

In Madison’s view, religion was but one source of conflict and oppression, which was ultimately rooted in the problem of differing opinions among humans, arising out of human beings’ inevitable fallibility and self-love.

A number of contemporary studies have demonstrated the truth of Madison’s insight.  On contentious issues ranging from global warming to gun control, people of greater intelligence tend to be more passionately divided than people of lesser intelligence, and more likely to interpret evidence in a way that supports the conclusions of the groups to which they belong.  Higher intelligence did not lead to more accurate conclusions but to a greater ability to interpret evidence in a way that supported pre-existing beliefs and group preferences.

Sam Harris himself displays tendencies toward extreme intolerance in his book that would make one leery of the simple claim that an enthusiastic commitment to reason would do much to end violence and oppression.  In his book The End of Faith, Harris declares that “[s]ome propositions are so dangerous that it may even be ethical to kill people for believing them” (pp. 52-53); he calls for imposing a “benign dictatorship” on backward societies as a means of self-defense (pp. 150-51); and he defends the use of torture on both military prisoners and criminal suspects in certain cases  (p. 197).  Harris even writes dreamily of what might have been if “some great kingdom of Reason emerged at the time of the Crusades and pacified the credulous multitudes of Europe and the Middle East.  We might have had modern democracy and the Internet by the year 1600.” (p. 109)  One can just imagine Sam Harris as the leader of this great “kingdom of Reason,” slaughtering, oppressing, and torturing ignorant, superstitious masses, all for the sake of lasting peace and progress.  Of course, there is nothing new about this dream.  It was the dream of the Jacobins and of the Communists as well, which was a nightmare for all who opposed them.

Edmund Burke, a keen observer of the Jacobin mentality, correctly noted why it was mistaken to believe that abolishing religion would do much to eliminate evil in the world:

 History consists, for the greater part, of the miseries brought upon the world by pride, ambition, avarice, revenge, lust, sedition, hypocrisy, ungoverned zeal, and all the train of disorderly appetites, which shake the public with the same

‘troublous storms that toss
The private state, and render life unsweet.’

These vices are the causes of those storms. Religion, morals, laws, prerogatives, privileges, liberties, rights of men, are the pretexts. The pretexts are always found in some specious appearance of a real good. You would not secure men from tyranny and sedition by rooting out of the mind the principles to which these fraudulent pretexts apply? If you did, you would root out everything that is valuable in the human breast. As these are the pretexts, so the ordinary actors and instruments in great public evils are kings, priests, magistrates, senates, parliaments, national assemblies, judges, and captains. You would not cure the evil by resolving that there should be no more monarchs, nor ministers of state, nor of the Gospel,—no interpreters of law, no general officers, no public councils. You might change the names: the things in some shape must remain. A certain quantum of power must always exist in the community, in some hands, and under some appellation. Wise men will apply their remedies to vices, not to names,—to the causes of evil, which are permanent, not to the occasional organs by which they act, and the transitory modes in which they appear. Otherwise you will be wise historically, a fool in practice. (Reflections on the Revolution in France)

Nevertheless, even if we accept Burke’s contention that the evils of religion lie in human nature and not in religion itself, there remains one question: shouldn’t we expect more of religion?  If religion doesn’t make people better, and simply reflects human nature, then what good is it?

To that question, I have to say that I honestly do not know.  History offers such a superabundance of both the good and ill effects of religious beliefs and institutions that I cannot fairly weigh the evidence.  In addition, widespread atheism is still a relatively new phenomenon in history, so I find it difficult to judge the long-term effects of atheism.  It is true that atheist regimes have committed many atrocities, but it is also the case that atheism is widespread in modern European democracies, and those countries are free of massacre and oppression, and have lower crime rates than the more religious United States.

Perhaps we should consider the views of one Christian who personally witnessed the catastrophic capitulation of Christian churches to the Nazi regime in the 1930s and decided to become a dissenter to the Nazi regime and the German Christian establishment that supported the Nazis.  Dietrich Bonhoeffer, who was executed by the Nazis in the waning days of World War Two, proposed a newly reformed Christianity that would indeed fulfill the role of making human beings better.  I will critically evaluate Bonhoeffer’s proposal in a future post.

The Role of Emotions in Knowledge

In a previous post, I discussed the idea of objectivity as a method of avoiding subjective error.  When people say that an issue needs to be looked at objectively, or that science is the field of knowledge best known for its objectivity, they are arguing for the need to overcome personal biases and prejudices, and to know things as they really are in themselves, independent of the human mind and perceptions.  However, I argued that truth needs to be understood as a fruitful or proper relationship between subjects and objects, and that it is impossible to know the truth by breaking this relationship.

One way of illustrating the relationship between subjects and objects is by examining the role of human emotions in knowledge.  Emotions are considered subjective, and one might argue that although emotions play a role in the form of knowledge known as the humanities (art, literature, religion), emotions are either unnecessary or an impediment to knowledge in the sciences.  However, a number of studies have demonstrated that feeling plays an important role in cognition, and that the loss of emotions in human beings leads to poor decision-making and an inability to cope effectively with the real world.  Emotionless human beings would in fact make poor scientists.

Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competency after damage took place to the emotional center of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  It might be expected that the loss of emotion would lead to failures in social relationships.  So why were these people unable to even effectively advance their self-interest?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51)

It is true that emotional swings can lead to very bad decisions: anger, depression, anxiety, and even excessive joy can all result in bad choices.  But the solution to this problem, according to Damasio, is to achieve the right emotional disposition, not to erase the emotions altogether.  One has to find the right balance or harmony of emotions.

Damasio describes one patient who, after suffering damage to the emotional center of his brain, gained one significant advantage: while driving to his appointment on icy roads, he was able to remain calm and drive safely, while other drivers had a tendency to panic when they skidded, leading to accidents.  However, Damasio notes the downside:

I was discussing with the same patient when his next visit to the laboratory should take place.  I suggested two alternative dates, both in the coming month and just a few days apart from each other.  The patient pulled out his appointment book and began consulting the calendar.  The behavior that ensued, which was witnessed by several investigators, was remarkable.  For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates . . . Just as calmly as he had driven over the ice, and recounted that episode, he was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.  It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates.  His response was equally calm and prompt.  He simply said, ‘That’s fine.’ (pp. 193-94)

So how would it affect scientific progress if all scientists were like the subjects Damasio studied: free of emotion and therefore, hypothetically, capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools for effective decision-making in everyday life are needed for the scientific enterprise as well.

As the French mathematician and scientist Henri Poincaré noted, every time we look at the world, we encounter an immense mass of unorganized facts.  We don’t have the time to thoroughly examine all those facts, and we don’t have the time to pursue experiments on all the hypotheses that may pop into our minds.  We have to use our intuition and best judgment to select the most important facts and develop the best hypotheses (Foundations of Science, pp. 127-30, 390-91).  An emotionless scientist would not only be unable to sustain the social interaction that science requires; he or she would be unable to develop a research plan, manage his or her time, or stick to that plan.  An ability to perceive value is fundamental to the scientific enterprise, and emotions are needed to properly perceive and act on the right values.

The Role of Imagination in Science, Part 3

In previous posts (here and here), I argued that mathematics was a product of the human imagination, and that the test of mathematical creations was not how real they were but how useful or valuable they were.

Recently, Russian mathematician Edward Frenkel, in an interview in the Economist magazine, argued the contrary case.  According to Frenkel,

[M]athematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness.  We mathematicians discover them and are able to connect to this hidden reality through our consciousness.  If Leo Tolstoy had not lived we would never have known Anna Karenina.  There is no reason to believe that another author would have written that same novel.  However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem.

Dr. Frenkel goes on to note that mathematical concepts don’t always match physical reality — Euclidean geometry represents an idealized three-dimensional flat space, whereas our actual universe has curved space.  Nevertheless, he argues, mathematical concepts must have an objective reality because “these concepts transcend any specific individual.”

One problem with this argument is the implicit assumption that the human imagination is wholly individualistic and arbitrary, and that if multiple people come up with the same idea, this must demonstrate that the idea exists objectively outside the human mind.  I don’t think this assumption is valid.  It’s perfectly possible for the same idea to be invented by multiple people independently.  Surely if Thomas Edison had never lived, someone else would have invented the light bulb.  Does that mean that the light bulb is not a true creation of the imagination, that it was not invented but always existed “objectively” before Edison came along and “discovered” it?  I don’t think so.  Likewise with modern modes of ground transportation, air transportation, manufacturing technology, and so on.  They’re all apt to be imagined and invented by multiple people working independently; it’s just that patent law only recognizes the first person to file.

It’s true that in other fields of human knowledge, such as literature, one is more likely to find creations that are truly unique.  Yes, Anna Karenina would probably not have been written by anyone else in the absence of Tolstoy.  However, even in literature, there are themes that are universal; character names and specific plot developments may vary, but many stories are variations on the same theme.  Consider the following story: two characters from different social groups meet and fall in love; the two social groups are antagonistic toward each other and would disapprove of the love; the two lovers meet secretly, but are eventually discovered; one or both lovers die tragically.  Is this not the basic plot of multiple stories, plays, operas, and musicals going back two thousand years?

Dr. Frenkel does admit that not all mathematical concepts correspond to physical reality.  But if there is not a correspondence to something in physical reality, what does it mean to say that a mathematical concept exists objectively?  How do we prove something exists objectively if it is not in physical reality?

If one looks at the history of mathematics, there is an intriguing pattern in which the earliest mathematical symbols do indeed seem to point to or correspond to objects in physical reality; but as time went on and mathematics advanced, mathematical concepts became more and more creative and distant from physical reality.  These later mathematical concepts were controversial among mathematicians at first, but later became widely adopted, not because someone proved they existed, but because the concepts seemed to be useful in solving problems that could not be solved any other way.

The earliest mathematical concepts were the “natural numbers,” the numbers we use for counting (1, 2, 3 . . .).  Simple operations were derived from these natural numbers.  If I have two apples and add three apples, I end up with five apples.  However, the number zero was initially controversial — how can nothing be represented by something?  The ancient Greeks and Romans, for all of their impressive accomplishments, did not use zero, and the number zero was not adopted in Europe until the Middle Ages.

Negative numbers were also controversial at first.  How can one have “negative two apples” or a negative quantity of anything?  However, it became clear that negative numbers were indeed useful conceptually.  If I have zero apples and borrow two apples from a neighbor, according to my mental accounting book, I do indeed have “negative two apples,” because I owe two apples to my neighbor.  It is an accounting fiction, but it is a useful and valuable fiction.  Negative numbers were invented in ancient China and India, but were rejected by Western mathematicians and were not widely accepted in the West until the eighteenth century.
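
If it helps to see the fiction at work, here is a minimal sketch in Python (my own illustration, not part of the historical account above): a tiny ledger whose balance dips below zero when apples are borrowed and returns to zero when the debt is repaid.

```python
# A tiny ledger: the balance goes negative when apples are borrowed,
# and the "fictional" negative quantity keeps the bookkeeping honest.
balance = 0
balance -= 2         # borrow two apples from a neighbor
print(balance)       # -2: I "have" negative two apples, i.e., I owe two
balance += 2         # return the two apples
print(balance)       # 0: the debt is settled
```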

The set of numbers known explicitly as “imaginary numbers” was even more controversial, since it involved a quantity which, when squared, yields a negative number.  Since no real number behaves this way, the imaginary numbers were initially derided.  However, imaginary numbers proved to be such a useful conceptual tool in solving certain problems that they gradually became accepted.  Imaginary numbers have been used to solve problems involving electric currents and quantum physics, and to represent rotations (directly in the plane, and, through their extension to quaternions, in three dimensions).
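
As a concrete illustration of how useful the fiction is, here is a minimal sketch using Python’s built-in complex-number type (the example is mine, not drawn from the sources discussed here): the imaginary unit squares to -1, and multiplying a point by it rotates that point a quarter turn in the plane, the kind of property that makes complex arithmetic handy for alternating currents and rotations.

```python
# Python writes the imaginary unit i as 1j.  Squaring it gives -1, the
# "impossible" operation, and multiplying any point by it rotates that
# point 90 degrees counterclockwise about the origin.
print(1j * 1j)        # (-1+0j): the imaginary unit squared is -1

point = 3 + 4j        # the point (3, 4) in the plane, written as a complex number
print(point * 1j)     # (-4+3j): the same point rotated a quarter turn to (-4, 3)
```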

Professor Stephen Hawking has used imaginary numbers in his own work on understanding the origins of the universe, employing “imaginary time” in order to explore what it might be like for the universe to be finite in time and yet have no real boundary or “beginning.”  The potential value of such a theory in explaining the origins of the universe leads Professor Hawking to state the following:

This might suggest that the so-called imaginary time is really the real time, and that what we call real time is just a figment of our imaginations.  In real time, the universe has a beginning and an end at singularities that form a boundary to space-time and at which the laws of science break down.  But in imaginary time, there are no singularities or boundaries.  So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like.  But according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds.  So it is meaningless to ask: which is real, “real” or “imaginary” time?  It is simply a matter of which is the more useful description.  (A Brief History of Time, p. 144.)

If you have trouble understanding this passage, you are not alone.  I have a hard enough time understanding imaginary numbers, let alone imaginary time.  The main point that I wish to underline is that even the best theoretical physicists don’t bother trying to prove that their conceptual tools are objectively real; the only test of a conceptual tool is whether it is useful.

As a final example, let us consider one of the most intriguing of imaginary mathematical objects, the “hypercube.”  A hypercube is a cube that extends into additional dimensions, beyond the three spatial dimensions of an ordinary cube.  (Time is usually referred to as the “fourth dimension,” but in this case we are dealing strictly with spatial dimensions.)  A hypercube can be imagined in four dimensions, five dimensions, eight dimensions, twelve dimensions — in fact, there is no limit to the number of dimensions a hypercube can have, though the hypercube gets increasingly complex and eventually impossible to visualize as the number of dimensions increases.
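
To see how freely the imagination (or a computer) can generate hypercubes of any dimension, here is a minimal sketch in Python; the vertex and edge counts are standard combinatorial facts, but the code itself is my own illustration rather than anything from the sources discussed here.

```python
# An n-dimensional hypercube has 2**n vertices (every string of n coordinates,
# each 0 or 1) and n * 2**(n-1) edges (pairs of vertices differing in exactly
# one coordinate).  Nothing limits n except patience and memory.
from itertools import product

def hypercube_vertices(n):
    """Return all vertices of the n-dimensional unit hypercube."""
    return list(product((0, 1), repeat=n))

for n in (3, 4, 8, 12):
    vertices = hypercube_vertices(n)
    print(n, "dimensions:", len(vertices), "vertices,", n * 2**(n - 1), "edges")
# 3 dimensions: 8 vertices, 12 edges
# 4 dimensions: 16 vertices, 32 edges
# 8 dimensions: 256 vertices, 1024 edges
# 12 dimensions: 4096 vertices, 24576 edges
```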

Does a hypercube correspond to anything in physical reality?  Probably not.  While there are theories in physics that posit five, eight, ten, or even twenty-six spatial dimensions, these theories also posit that the additional spatial dimensions beyond our third dimension are curved up in very, very small spaces.  How small?  A million million million million millionth of an inch, according to Stephen Hawking (A Brief History of Time, p. 179).  So as a practical matter, hypercubes could exist only on the most minute scale.  And that’s probably a good thing, as Stephen Hawking points out, because in a universe with four fully-sized spatial dimensions, gravitational forces would become so sensitive to minor disturbances that planetary systems, stars, and even atoms would fly apart or collapse (pp. 180-81).

Dr. Frenkel would admit that hypercubes may not correspond to anything in physical reality.  So how do hypercubes exist?  Note that there is no limit to how many dimensions a hypercube can have.  Does it make sense to say that the hypercube consisting of exactly 32,458 dimensions exists objectively out there somewhere, waiting for someone to discover it?   Or does it make more sense to argue that the hypercube is an invention of the human imagination, and can have as many dimensions as can be imagined?  I’m inclined to the latter view.

Many scientists insist that mathematical objects must exist out there somewhere because they’ve been taught that a good scientist must be objective and dedicate himself or herself to the discovery of things that exist independently of the human mind.  But there are too many mathematical ideas that are clearly products of the human mind, and they are too useful to abandon merely because they are products of the mind.

Miracles

The Oxford English Dictionary defines a “miracle” as “a marvelous event occurring within human experience, which cannot have been brought about by any human power or by the operation of any natural agency, and must therefore be ascribed to the special intervention of the Deity or some supernatural being.”  (OED, 1989)  This meaning reflects how the word “miracle” has been commonly used in the English language for hundreds of years.

Since a miracle, by definition, involves a suspension of physical laws in nature by some supernatural entity, the question of whether miracles take place, or have ever taken place, is an important one.  Most adherents of religion — any religion — are inclined to believe in miracles; skeptics argue that there is no evidence to support the existence of miracles.

I believe skeptics are correct that the evidence for a supernatural agency occasionally suspending the normal processes and laws of nature is very weak or nonexistent.  Scientists have been studying nature for hundreds of years; when an observed event does not appear to follow physical laws, it usually turns out that the law is imperfectly understood and needs to be modified, or there is some other physical law that needs to be taken into account.  Scientists have not found evidence of a supernatural being behind observational anomalies.  This is not to say that everything in the universe is deterministic and can be reduced to physical laws.  Most scientists agree that there is room for indeterminacy in the universe, with elements of freedom and chance.  But this indeterminacy does not seem to correspond to what people have claimed as miracles.

However, I would like to make the case that the way we think about miracles is all wrong, that our current conception of what counts as a miracle is based on a mistaken prejudice in favor of events that we are unaccustomed to.

According to the Oxford English Dictionary, the word “miracle” is derived from the Latin word “miraculum,” which is an “object of wonder.” (OED 1989)  A Latin dictionary similarly defines “miraculum” as “a wonderful, strange, or marvelous thing, a wonder, marvel, miracle.” (Charlton T. Lewis, A Latin Dictionary, 1958)  There is nothing in the original Latin conception of miraculum that requires a belief in the suspension of physical laws.  Miraculum is simply about wonder.

Wonder as an activity is an intellectual exercise, but it is also an emotional disposition.  We wonder about the improbable nature of our existence, we wonder about the vastness of the universe, we wonder about the enormous complexity and diversity of life.  From wonder often come other emotional dispositions: astonishment, puzzlement, joy, and gratitude.

The problem is that in our humdrum, everyday lives, it is easy to lose wonder.  We become accustomed to existence through repeated exposure to the same events happening over and over, and we no longer wonder.  The satirical newspaper The Onion expresses this disposition well: “Miracle Of Birth Occurs For 83 Billionth Time,” reads one headline.

Is it really the case, though, that a wondrous event ceases to be wondrous because it occurs frequently, regularly, and appears to be guided by causal laws?  The birth of a human being begins with blueprints provided by an egg cell and a sperm cell; over the course of nine months, over 100,000,000,000,000,000,000,000,000 atoms of oxygen, carbon, hydrogen, nitrogen and other elements gradually come together in the right place at the right time to form the extremely intricate arrangement known as a human being.  If anything is a miraculum, or wonder, it is this event.  But because it happens so often, we stop noticing.  Stories about crying statues, or people seeing the heart of Jesus in a communion wafer, or the face of Jesus in a sock get our attention and are hailed as miracles because these alleged events are unusual.  But if you think about it, these so-called miracles are pretty insignificant in comparison to human birth.  And if crying statues were a frequent event, people would gradually become accustomed to them; after a while, they would stop caring, and start looking around for something new to wonder about.

What a paradox.  We are surrounded by genuine miracles every day, but we don’t notice them.  So we grasp at the most trivial coincidences and hoaxes in order to restore our sense of wonder, when what we should be doing is not taking so many wonders for granted.

The Role of Imagination in Science, Part 2

In a previous posting, we examined the status of mathematical objects as creations of the human mind, not objectively existing entities.  We also discussed the fact that the science of geometry has expanded from a single system to a great many systems, with no single system being true.  So what prevents mathematics from falling into nihilism?

Many people seem to assume that if something is labeled as “imaginary,” it is essentially arbitrary or of no consequence, because it is not real.  If something is a “figment of imagination” or “exists only in your mind,” then it is of no value to scientific knowledge.  However, two considerations impose limits or restrictions on imagination that prevent descent into nihilism.

The first consideration is that even imaginary objects have properties that are real or unavoidable, once they are proposed.  In The Mathematical Experience, mathematics professors Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.”  This may be a difficult concept to grasp (it took me a long time to grasp it), but consider some simple examples:

Imagine a circle in your mind.  Got that?  Now imagine a circle in which the radius of the circle is greater than the circumference of the circle.  If you are imagining correctly, it can’t be done.  Whether or not you know that the circumference of a circle is equal to twice the radius times pi, you should know that the circumference of a circle is always going to be larger than the radius.

Now imagine a right triangle.  Can you imagine a right triangle with a hypotenuse that is shorter than either of the two other sides?  No, whether or not you know the Pythagorean theorem, it’s in the very nature of a right triangle to have a hypotenuse that is longer than either of the two remaining sides.  This is what we mean by “true facts about imaginary objects.”  Once you specify an imagined object with certain basic properties, other properties follow inevitably from those initial, basic properties.
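
For readers who do know the formulas, both facts can be verified in a line or two.  The derivations below are standard (nothing specific to Davis and Hersh), using the circumference formula and the Pythagorean theorem.

```latex
% Circle: with circumference C = 2 \pi r and \pi > 3, the circumference
% necessarily exceeds the radius.
C = 2\pi r > 6r > r \qquad (r > 0)

% Right triangle: with legs a, b > 0, the Pythagorean theorem forces the
% hypotenuse c to exceed either leg.
c = \sqrt{a^{2} + b^{2}}, \qquad
c^{2} = a^{2} + b^{2} > a^{2} \;\Rightarrow\; c > a,
\qquad \text{and likewise } c > b.
```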

The second consideration that puts restrictions on the imagination is this: while it may be possible to invent an infinite number of mathematical objects, only a limited number of those objects are going to be of value.  What makes a mathematical object valuable?  In fact, there are multiple criteria for valuing mathematical objects, some of which may conflict with each other.

The most important criterion of mathematical objects according to scientists is the ability to predict real-world phenomena.  Does a particular equation or model allow us to predict the motion of stars and planets; or the multiplication of life forms; or the growth of a national economy?  This ability to predict is a most powerful attribute of mathematics — without it, it is not likely that scientists would bother using mathematics at all.

Does the ability to predict real-world phenomena demonstrate that at least some mathematical objects, however imaginary, at least correspond to or model reality?  Yes — and no.  For in most cases it is possible to choose from a number of different mathematical models that are approximately equal in their ability to predict, and we are still compelled to refer to other criteria in choosing which mathematical object to use.  In fact, there are often tradeoffs when evaluating the various criteria — often, no single mathematical object is best on all criteria.

One of the most important criteria after predictive ability is simplicity.  Although it has been demonstrated that Euclidean geometry is not the only type of geometry, it is still widely used because it is the simplest.  In general, scientists like to begin with the simplest model first; if that model becomes inadequate in predicting real-world events, they modify the model or choose a new one.  There is no point in starting with an unnecessarily complex geometry, and when one’s model gets too complex, the chance of error increases significantly.  In fact, simplicity is regarded as an important aspect of mathematical beauty — a mathematical proof that is excessively long and complicated is considered ugly, while a simple proof that provides answers with few steps is beautiful.

Another criterion for choosing one mathematical object over another is scope or comprehensiveness.  Does the mathematical object apply only in limited, specific circumstances?  Or does it apply broadly to phenomena, tying together multiple events under a single model?

There is also the criterion of fruitfulness.  Is the model going to provide many new research findings?  Or is it going to be limited to answering one or two questions, providing no basis for additional progress?

Ultimately, it’s impossible to get away from value judgments when evaluating mathematical objects.  Correspondence to reality cannot be the only value.  Why do we use the Hindu-Arabic numeral system today and not the Roman numeral system?  I don’t think it makes sense to say that the Hindu-Arabic system corresponds to reality more accurately than the Roman numeral system.  Rather, the Hindu-Arabic numeral system is easier to use for many calculations, and it is more powerful in obtaining useful results.  Likewise, a base 10 numeral system doesn’t correspond to reality more accurately than a base 2 numeral system — it’s just easier for humans to use a base 10 system.  For computers, it is easier to use a base 2 system.  A base 60 system, such as the ancient Babylonians used, is more difficult for many calculations than base 10, but it is more useful in measuring time and angles.  Why?  Because 60 has so many divisors (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60), it can express fractions of units more simply, which is why we continue to use a modified version of base 60 for measuring time and angles (and geographic coordinates) to this day.
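
A quick way to see the point about divisors is to compute them; the short Python sketch below (my own, not from any of the sources cited) compares the divisors of 10 with those of 60.

```python
# A base b expresses the fraction 1/n "evenly" in one place whenever n divides b,
# so the more divisors a base has, the more unit fractions come out clean.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))   # [1, 2, 5, 10] -- only halves, fifths, and tenths divide evenly
print(divisors(60))   # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
# An hour splits cleanly into halves, thirds, quarters, fifths, sixths, tenths,
# twelfths, and so on -- one reason minutes, seconds, and degrees still use 60.
```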

What about mathematical objects that don’t predict real world events or appear to model anything in reality at all?  This is the realm of pure mathematics, and some mathematicians prefer this realm to the realm of applied mathematics.  Do we make fun of pure mathematicians for wasting time on purely imaginary objects?  No, pure mathematics is still a form of knowledge, and mathematicians still seek beauty in mathematics.

Ultimately, imaginative knowledge is not arbitrary or inconsequential; there are real limits even for the imagination.  There may be an infinite number of mathematical systems that can be imagined, but only a limited number will be good.  Likewise, there is an infinite variety of musical compositions, paintings, and novels that can be created by the imagination, but only a limited number will be good, and only a very small number will be truly superb.  So even the imagination has standards, and these standards apply as much to the sciences as to the arts.

The Role of Imagination in Science, Part 1

In Zen and the Art of Motorcycle Maintenance, author Robert Pirsig argues that the basic conceptual tools of science, such as the number system, the laws of physics, and the rules of logic, have no objective existence, but exist in the human mind.  These conceptual tools were not “discovered” but created by the human imagination.  Nevertheless we use these concepts and invent new ones because they are good — they help us to understand and cope with our environment.

As an example, Pirsig points to the uncertain status of the number “zero” in the history of western culture.  The ancient Greeks were divided on the question of whether zero was an actual number – how could nothing be represented by something? – and did not widely employ zero.  The Romans’ numerical system also excluded zero.  It was only in the Middle Ages that the West finally adopted the number zero by accepting the Hindu-Arabic numeral system.  The ancient Greek and Roman civilizations did not neglect zero because they were blind or stupid.  If future generations adopted the use of zero, it was not because they suddenly discovered that zero existed, but because they found the number zero useful.

In fact, while mathematics appears to be absolutely essential to progress in the sciences, mathematics itself continues to lack objective certitude, and the philosophy of mathematics is plagued by questions of foundations that have never been resolved.  If asked, the majority of mathematicians will argue that mathematical objects are real, that they exist in some unspecified eternal realm awaiting discovery by mathematicians; but if you follow up by asking how we know that this realm exists, how we can prove that mathematical objects exist as objective entities, mathematicians cannot provide an answer that is convincing even to their fellow mathematicians.  For many decades, according to mathematicians Philip J. Davis and Reuben Hersh, the brightest minds sought to provide a firm foundation for mathematical truth, only to see their efforts founder (“Foundations, Found and Lost,” in The Mathematical Experience).

In response to these failures, mathematicians divided into multiple camps.  While the majority of mathematicians still insisted that mathematical objects were real, the school of fictionalism claimed that all mathematical objects were fictional.  Nevertheless, the fictionalists argued that mathematics was a useful fiction, so it was worthwhile to continue studying mathematics.  The school of formalism described mathematics as a set of statements about the consequences of following certain rules of the game — one can create many “games,” and these games have different outcomes resulting from different sets of rules, but the games may not be about anything real.  The school of finitism argued that only the natural numbers (i.e., numbers for counting, such as 1, 2, 3 . . .) and numbers that can be derived from the natural numbers are real; all other numbers are creations of the human mind.  Even if one dismisses these schools as representing only a minority, the fact that there is such stark disagreement among mathematicians about the foundations of mathematics is unsettling.

Ironically, as mathematical knowledge has increased over the years, so has uncertainty.  For many centuries, it was widely believed that Euclidean geometry was the most certain of all the sciences.  However, by the late nineteenth century, it was discovered that one could create different geometries that were just as valid as Euclidean geometry — in fact, it was possible to create an infinite number of valid geometries.  Instead of converging on a single, true geometry, mathematicians have seemingly gone off in all different directions.  So what prevents mathematics from falling into complete nihilism, in which every method is valid and there are no standards?  This is an issue we will address in a subsequent posting.