How Powerful is God?

In a previous post, we discussed the non-omnipotent God of process theology as a possible explanation for the twin facts that the universe appears to be fine-tuned for life and yet evolution is extremely slow and life precarious.  The problem with process theology, however, is that God appears to be extremely weak.  Is the concept of a non-omnipotent God worthwhile?

One response to this criticism is that portraying God as weak simply because the universe was not instantaneously and perfectly constructed for life is to misconstrue what “weak” means.  The mere fact that the universe, consisting of at least 10,000,000,000,000,000,000,000 stars, was created out of nothingness and has lasted over 13 billion years does not seem to indicate weakness.

Another response would be that the very gradual incrementalism of evolution may be a necessary component of a fantastically complex system that cannot tolerate errors that would threaten to destroy the system.  That is, the various physical laws and numerical constants that underlie the order of the universe exist in such an intricate relationship that a violation of a law in one particular case or sudden change in one of the constants would cause the universe to self-destruct, in the same way that a computer program may crash if a single line of code is incorrect or is incompatible with the other lines of code.

In fact, a number of physicists have explicitly described the universe as a type of computer, in the sense that the order of the universe is based on the processing of information in the form of the physical laws and constants.  Of course, the chief difference between the universe and a computer is that we can live with a computer crashing occasionally — we cannot live with the universe crashing even once.  Thus the fact that the universe, while not immortal, never seems to crash, indicates that gradual evolution may be necessary.  Perhaps instability on the micro level of the universe (an asteroid occasionally crashing into a planet with life) is the price to be paid for stability on the macro level.

Alternatively, we can conceptualize the order behind the universe as a type of mind, “mind” being defined broadly as any system for processing information.  We can posit three types of mind in the historical development of the universe: cosmic mind (God), biological mind (human/animal mind), and electronic mind (computer).

Cosmic mind can be thought of as pure spirit, or pure information, if you will.  Cosmic mind can create matter and a stable foundation for the universe, but once matter is created, the influence of spirit on matter is relatively weak.  That is, there is a division between the world of spirit and the world of matter that is difficult to bridge.  Biological mind does not know everything cosmic mind does and it is limited in time and space, but biological mind can more efficiently act on matter, since it is part of the world of matter.  Electronic mind (computer) is a creation of biological mind but processes larger amounts of information more quickly, assisting biological mind in the manipulation of matter.

As a result, the evolution of the universe began very slowly, but has recently accelerated as a result of incremental improvements to mind.  According to Stephen Hawking,

The process of biological evolution was very slow at first. It took two and a half billion years, to evolve from the earliest cells to multi-cell animals, and another billion years to evolve through fish and reptiles, to mammals. But then evolution seemed to have speeded up. It only took about a hundred million years, to develop from the early mammals to us. . . . [W]ith the human race, evolution reached a critical stage, comparable in importance with the development of DNA. This was the development of language, and particularly written language. It meant that information can be passed on, from generation to generation, other than genetically, through DNA. . . .  [W]e are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. . . . If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars.  (“Life in the Universe“)

According to physicist Freeman Dyson (Disturbing the Universe), even if interstellar spacecraft achieve only one percent of the speed of light, a speed within the possibility of present-day technology, the Milky Way galaxy could be colonized end-to-end in ten million years – a very long time from an individual human’s perspective, but a remarkably short time in the history of evolution, considering that it took 2.5 billion years simply to make the transition from single-celled life forms to multi-celled creatures.
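As a rough check on Dyson’s figure, the arithmetic is simple.  The galactic diameter of roughly 100,000 light-years is my assumption here, not a number given in the passage:

\[
\frac{100{,}000 \ \text{light-years}}{0.01\,c} \;=\; 10{,}000{,}000 \ \text{years}
\]

Actual colonization would take longer than straight travel time, since it would proceed in stages from star to star, but the order of magnitude matches Dyson’s estimate.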

So cosmic mind can be very powerful in the long run, but patience is required!

A Universe Half Full?

It has often been said that the difference between a pessimist and an optimist is that a pessimist sees a half-poured beverage as a glass half empty, whereas an optimist sees the glass as being half full.  I think the decision to adopt or reject atheism may originate from such a perspective — that is, atheists see the universe as half empty, whereas believers see the universe as half full.  We all go through life experiencing events both good and bad, moments of joy, beauty, and wonder, along with moments of despair, ugliness, and boredom.  When we experience the positive, we may be inclined to attribute purpose and benevolence to the universal order; when we experience the negative, we may be more apt to attribute disorder and meaninglessness to the universe.

So, is it all a matter of perspective?  If we are serious thinkers, we have to reject the conclusion that it is merely a matter of perspective.  Either there is a God or there isn’t.  If we are going to explain the universe, we have to explain everything, good and bad, and not neglect facts that don’t fit.

The case for atheism is fairly straightforward: the facts of science indicate a universe that is not very hospitable to either the emergence of life or the protection of life, which greatly undercuts the case for an intelligent designer.  Most planets have no life at all, or at best only the most primitive, insignificant forms of life.  Where life does exist, life is precarious and cruel; on a daily basis, life forms are attacked and destroyed by hostile physical forces and other life forms.  There is not the slightest historical or archaeological evidence of a “golden age” or a “Garden of Eden” that once existed but was lost because of man’s sinfulness; life has always been precarious and cruel.  Even where life has developed, it has developed in a process of very gradual evolution, consisting of much randomness, over the course of billions of years.  And even after billions of years of progress, life on earth has been subject to occasional mass extinction events, from an asteroid or comet striking the planet, to volcanic eruptions, to dramatic climate change.  Even if one granted that God created life very gradually, the notion that God would allow a dumb rock from space to wipe out the accomplishments of several billion years of evolution seems inexplicable.

The case for belief in God rests on a contrary claim, namely that order in the universe is too complex and unusual to be explained merely by reference to purposeless physical laws and random events.  Physical laws may appear to operate without purpose, such as when an asteroid causes a mass extinction, and evolution certainly consists of many random events.  But there is too much order to subscribe to the view that the universe is nothing but blind laws and random events.  When one studies the development of the stars and planets and their predictable motions, the vast diversity and complexity of life on earth, and the amount of information contained in a single DNA molecule, randomness is not the first thing one thinks of.  Total randomness implies total disorder and a total lack of pattern, but the randomness we see in the universe takes place within a certain structure.  If you roll a die, there are six possible outcomes; if you flip a coin, there are two possible outcomes.  Both actions are random, but a structure of order determines the range of possible outcomes.  Likewise, there is randomness and disorder in the universe, but there is a larger structure of order that provides general stability and restricts outcomes.  Mutations take place in life forms, but these mutations are limited and incremental, restricting the range of possible outcomes and allowing the development of new forms of life on top of old forms of life.

Physicists tend to agree that we appear to live in a universe “fine-tuned” for life, in the sense that many physical constants can only exist with certain values, or life would not be able to evolve.  According to Stephen Hawking, “The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and electron. . . . The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”  Physicist Paul Davies writes:

 [L]ife as we know it depends very sensitively on the form of the laws of physics, and on some seemingly fortuitous accidents in the actual values that nature has chosen for various particle masses, force strengths, and so on. . . . [I]f we could play God, and select values for these quantities at whim by twiddling a set of knobs, we would find that almost all knob settings would render the universe uninhabitable.  In some cases it seems as if the different knobs have to be fine-tuned to enormous precision if the universe is to be such that life will flourish. (The Mind of God, pp. 199-200).

The counterargument to the “fine-tuned” argument is that there could exist many universes that self-destruct in a short period of time or don’t have life — we just happen to live in a fine-tuned universe because only a fine-tuned universe can allow the existence of life forms that think about how fine-tuned the universe is!  However, this argument rests on the hypothetical belief that many alternative universes have existed or do exist, and until there is evidence for other universes, it must remain highly speculative.

So how do we reconcile the two sets of facts presented by the atheists and the believers?  On the one hand, the universe appears to allow life to develop only extremely gradually under often hostile conditions, with many setbacks along the way.  On the other hand, the universe appears to be fine-tuned to support life, suggesting some sort of cosmic purpose or intelligence.

In my view, the only way to reconcile the two sets of facts is to conceive of God as being very powerful, but not omnipotent.  (See a previous posting on this subject.)  According to process theology, God’s power is not coercive but persuasive, and God acts over long periods of time to create.  Existing things are not subject to total central control, but God can influence outcomes.

An analogy could be made with the human mind and its control over the body.  It is easy to raise one’s right arm by using one’s thoughts, but to pitch a fastball, play a piano, or make a high-quality sculpture requires a level of coordination and skill that most of us do not have — as well as an extraordinary amount of training and practice.  In the course of life, we attempt many things, but are never successful at all we attempt; in fact, the ambitions in our minds usually outpace our physical abilities.  Some people do not even have the ability to raise their right arm.  The relation of a cosmic mind to the “body” of the universe may be similar in principle.

Some would object that the God of process theology is ridiculously weak.  A God that has only the slightest influence over matter and cannot even stop an asteroid from hitting a planet does not seem like a God worth worshiping or even respecting.  In fact, why do we even need the concept of a weak God — wouldn’t we be better off without it?  I will address this topic in a future posting.

The Role of Imagination in Science, Part 2

In a previous posting, we examined the status of mathematical objects as creations of the human mind, not objectively existing entities.  We also discussed the fact that the science of geometry has expanded from a single system to a great many systems, with no single system being true.  So what prevents mathematics from falling into nihilism?

Many people seem to assume that if something is labeled as “imaginary,” it is essentially arbitrary or of no consequence, because it is not real.  If something is a “figment of imagination” or “exists only in your mind,” then it is of no value to scientific knowledge.  However, two considerations impose limits or restrictions on imagination that prevent descent into nihilism.

The first consideration is that even imaginary objects have properties that are real or unavoidable, once they are proposed.  In The Mathematical Experience, mathematics professors Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.”  This may be a difficult concept to grasp (it took me a long time to grasp it), but consider some simple examples:

Imagine a circle in your mind.  Got that?  Now imagine a circle in which the radius of the circle is greater than the circumference of the circle.  If you are imagining correctly, it can’t be done.  Whether or not you know that the circumference of a circle is equal to twice the radius times pi, you should know that the circumference of a circle is always going to be larger than the radius.

Now imagine a right triangle.  Can you imagine a right triangle with a hypotenuse that is shorter than either of the two other sides?  No, whether or not you know the Pythagorean theorem, it’s in the very nature of a right triangle to have a hypotenuse that is longer than either of the two remaining sides.  This is what we mean by “true facts about imaginary objects.”  Once you specify an imagined object with certain basic properties, other properties follow inevitably from those initial, basic properties.
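For readers who want the formal counterpart to these two mental experiments, the familiar formulas (which the examples deliberately avoid relying on) confirm what the imagination suggests:

\[
C = 2\pi r > r \quad \text{for every radius } r > 0, \ \text{since } 2\pi \approx 6.28 > 1
\]
\[
c^{2} = a^{2} + b^{2} \;\Longrightarrow\; c > a \ \text{and} \ c > b \quad \text{whenever } a, b > 0
\]

In both cases the “extra” property is not stipulated separately; it follows inevitably from the defining properties of the imagined object.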

The second consideration that puts restrictions on the imagination is this: while it may be possible to invent an infinite number of mathematical objects, only a limited number of those objects are going to be of value.  What makes a mathematical object valuable?  In fact, there are multiple criteria for valuing mathematical objects, some of which may conflict with each other.

The most important criterion, at least for scientists, is the ability to predict real-world phenomena.  Does a particular equation or model allow us to predict the motion of stars and planets, the multiplication of life forms, or the growth of a national economy?  This ability to predict is among the most powerful attributes of mathematics — without it, it is unlikely that scientists would bother using mathematics at all.

Does the ability to predict real-world phenomena demonstrate that at least some mathematical objects, however imaginary, correspond to or model reality?  Yes — and no.  In most cases it is possible to choose from a number of different mathematical models that are approximately equal in their ability to predict, and we are still compelled to refer to other criteria in choosing which mathematical object to use.  In fact, there are often tradeoffs among the various criteria — often, no single mathematical object is best on all of them.

One of the most important criteria after predictive ability is simplicity.  Although it has been demonstrated that Euclidean geometry is not the only type of geometry, it is still widely used because it is the simplest.  In general, scientists like to begin with the simplest model first; if that model becomes inadequate in predicting real-world events, they modify the model or choose a new one.  There is no point in starting with an unnecessarily complex geometry, and when one’s model gets too complex, the chance of error increases significantly.  In fact, simplicity is regarded as an important aspect of mathematical beauty — a mathematical proof that is excessively long and complicated is considered ugly, while a simple proof that provides answers with few steps is beautiful.

Another criterion for choosing one mathematical object over another is scope or comprehensiveness.  Does the mathematical object apply only in limited, specific circumstances?  Or does it apply broadly to phenomena, tying together multiple events under a single model?

There is also the criterion of fruitfulness.  Is the model going to provide many new research findings?  Or is it going to be limited to answering one or two questions, providing no basis for additional progress?

Ultimately, it’s impossible to get away from value judgments when evaluating mathematical objects.  Correspondence to reality cannot be the only value.  Why do we use the Hindu-Arabic numeral system today and not the Roman numeral system?  I don’t think it makes sense to say that the Hindu-Arabic system corresponds to reality more accurately than the Roman numeral system.  Rather, the Hindu-Arabic numeral system is easier to use for many calculations, and it is more powerful in obtaining useful results.  Likewise, a base 10 numeral system doesn’t correspond to reality more accurately than a base 2 numeral system — it’s just easier for humans to use a base 10 system, while for computers it is easier to use a base 2 system.  A base 60 system, such as the ancient Babylonians used, is more difficult for many calculations than a base 10 system, but it is more useful for measuring time and angles.  Why?  Because 60 has so many divisors (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60), it can express fractions of units more simply, which is why we continue to use a modified version of base 60 for measuring time, angles, and geographic coordinates to this day.
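A small illustrative sketch (my own, not from the original post) makes the divisor point concrete: if a unit is split into 60 subunits rather than 10, far more simple fractions of the unit come out to a whole number of subunits.  This is why a third of an hour is exactly 20 minutes, while a third of a dollar cannot be expressed as a whole number of cents.

```python
# Illustrative sketch (not from the original post): for a unit divided into
# `base` subunits, which fractions 1/n correspond to a whole number of subunits?

def exact_unit_fractions(base, max_n=12):
    """Return every n up to max_n for which 1/n of the unit is a whole number of subunits."""
    return [n for n in range(2, max_n + 1) if base % n == 0]

for base in (10, 60):
    print(base, exact_unit_fractions(base))

# Output:
# 10 [2, 5, 10]
# 60 [2, 3, 4, 5, 6, 10, 12]
```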

What about mathematical objects that don’t predict real world events or appear to model anything in reality at all?  This is the realm of pure mathematics, and some mathematicians prefer this realm to the realm of applied mathematics.  Do we make fun of pure mathematicians for wasting time on purely imaginary objects?  No, pure mathematics is still a form of knowledge, and mathematicians still seek beauty in mathematics.

Ultimately, imaginative knowledge is not arbitrary or inconsequential; there are real limits even for the imagination.  There may be an infinite number of mathematical systems that can be imagined, but only a limited number will be good.  Likewise, there is an infinite variety of musical compositions, paintings, and novels that can be created by the imagination, but only a limited number will be good, and only a very small number will be truly superb.  So even the imagination has standards, and these standards apply as much to the sciences as to the arts.

The Role of Imagination in Science, Part 1

In Zen and the Art of Motorcycle Maintenance, author Robert Pirsig argues that the basic conceptual tools of science, such as the number system, the laws of physics, and the rules of logic, have no objective existence, but exist in the human mind.  These conceptual tools were not “discovered” but created by the human imagination.  Nevertheless we use these concepts and invent new ones because they are good — they help us to understand and cope with our environment.

As an example, Pirsig points to the uncertain status of the number “zero” in the history of western culture.  The ancient Greeks were divided on the question of whether zero was an actual number – how could nothing be represented by something? – and did not widely employ zero.  The Romans’ numerical system also excluded zero.  It was only in the Middle Ages that the West finally adopted the number zero by accepting the Hindu-Arabic numeral system.  The ancient Greek and Roman civilizations did not neglect zero because they were blind or stupid.  If future generations adopted the use of zero, it was not because they suddenly discovered that zero existed, but because they found the number zero useful.

In fact, while mathematics appears to be absolutely essential to progress in the sciences, mathematics itself continues to lack objective certitude, and the philosophy of mathematics is plagued by questions of foundations that have never been resolved.  If asked, the majority of mathematicians will argue that mathematical objects are real, that they exist in some unspecified eternal realm awaiting discovery by mathematicians; but if you follow up by asking how we know that this realm exists, how we can prove that mathematical objects exist as objective entities, mathematicians cannot provide an answer that is convincing even to their fellow mathematicians.  For many decades, according to mathematicians Philip J. Davis and Reuben Hersh, the brightest minds sought to provide a firm foundation for mathematical truth, only to see their efforts founder (“Foundations, Found and Lost,” in The Mathematical Experience).

In response to these failures, mathematicians divided into multiple camps.  While the majority of mathematicians still insisted that mathematical objects were real, the school of fictionalism claimed that all mathematical objects were fictional.  Nevertheless, the fictionalists argued that mathematics was a useful fiction, so it was worthwhile to continue studying mathematics.  The school of formalism described mathematics as a set of statements about the consequences of following certain rules of a game — one can create many “games,” and these games have different outcomes resulting from different sets of rules, but the games may not be about anything real.  The school of finitism argued that only the natural numbers (i.e., the counting numbers 1, 2, 3, . . .) and numbers that can be derived from them are real; all other numbers are creations of the human mind.  Even if one dismisses these schools as representing only a minority, the fact that there is such stark disagreement among mathematicians about the foundations of mathematics is unsettling.

Ironically, as mathematical knowledge has increased over the years, so has uncertainty.  For many centuries, it was widely believed that Euclidean geometry was the most certain of all the sciences.  However, by the late nineteenth century, it was discovered that one could create different geometries that were just as valid as Euclidean geometry — in fact, it was possible to create an infinite number of valid geometries.  Instead of converging on a single, true geometry, mathematicians have seemingly gone off in all different directions.  So what prevents mathematics from falling into complete nihilism, in which every method is valid and there are no standards?  This is an issue we will address in a subsequent posting.