The Role of Emotions in Knowledge

In a previous post, I discussed the idea of objectivity as a method of avoiding subjective error.  When people say that an issue needs to be looked at objectively, or that science is the field of knowledge best known for its objectivity, they are arguing for the need to overcome personal biases and prejudices, and to know things as they really are in themselves, independent of the human mind and perceptions.  However, I argued that truth needs to be understood as a fruitful or proper relationship between subjects and objects, and that it is impossible to know the truth by breaking this relationship.

One way of illustrating the relationship between subjects and objects is by examining the role of human emotions in knowledge.  Emotions are considered subjective, and one might argue that although emotions play a role in the form of knowledge known as the humanities (art, literature, religion), emotions are either unnecessary or an impediment to knowledge in the sciences.  However, a number of studies have demonstrated that feeling plays an important role in cognition, and that the loss of emotions in human beings leads to poor decision-making and an inability to cope effectively with the real world.  Emotionless human beings would in fact make poor scientists.

Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competence after the emotional centers of their brains were damaged.  They lost the capacity to make good decisions, to get along with other people, to manage their time, and to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  One might expect the loss of emotion to damage social relationships; but why were these people unable even to advance their own self-interest effectively?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51)

It is true that emotional swings — anger, depression, anxiety, even excessive joy — can lead to very bad decisions.  But the solution to this problem, according to Damasio, is to achieve the right emotional disposition, not to erase the emotions altogether.  One has to find the right balance or harmony of emotions.

Damasio describes one patient who, after suffering damage to the emotional center of his brain, gained one significant advantage: while driving to his appointment on icy roads, he was able to remain calm and drive safely, while other drivers had a tendency to panic when they skidded, leading to accidents.  However, Damasio notes the downside:

I was discussing with the same patient when his next visit to the laboratory should take place.  I suggested two alternative dates, both in the coming month and just a few days apart from each other.  The patient pulled out his appointment book and began consulting the calendar.  The behavior that ensued, which was witnessed by several investigators, was remarkable.  For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates . . . Just as calmly as he had driven over the ice, and recounted that episode, he was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.  It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates.  His response was equally calm and prompt.  He simply said, ‘That’s fine.’ (pp. 193-94)

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion and therefore, hypothetically, capable of perfect objectivity?  It seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools needed for effective decision-making in everyday life are needed for the scientific enterprise as well.

As the French mathematician and scientist Henri Poincaré noted, every time we look at the world, we encounter an immense mass of unorganized facts.  We don’t have the time to thoroughly examine all those facts, and we don’t have the time to pursue experiments on all the hypotheses that may pop into our minds.  We have to use our intuition and best judgment to select the most important facts and develop the best hypotheses (Foundations of Science, pp. 127-30, 390-91).  An emotionless scientist would not only be unable to sustain the social interaction that science requires; he or she would be unable to develop a research plan, manage his or her time, or stick to the plan once made.  An ability to perceive value is fundamental to the scientific enterprise, and emotions are needed to properly perceive and act on the right values.

The Role of Imagination in Science, Part 3

In previous posts (here and here), I argued that mathematics was a product of the human imagination, and that the test of mathematical creations was not how real they were but how useful or valuable they were.

Recently, Russian mathematician Edward Frenkel, in an interview in the Economist magazine, argued the contrary case.  According to Frenkel,

[M]athematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness.  We mathematicians discover them and are able to connect to this hidden reality through our consciousness.  If Leo Tolstoy had not lived we would never have known Anna Karenina.  There is no reason to believe that another author would have written that same novel.  However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem.

Dr. Frenkel goes on to note that mathematical concepts don’t always correspond to physical reality — Euclidean geometry describes an idealized three-dimensional flat space, whereas the space of our actual universe is curved.  Nevertheless, he argues, mathematical concepts must have an objective reality because “these concepts transcend any specific individual.”

One problem with this argument is the implicit assumption that the human imagination is wholly individualistic and arbitrary, and that if multiple people come up with the same idea, this must demonstrate that the idea exists objectively outside the human mind.  I don’t think this assumption is valid.  It’s perfectly possible for the same idea to be invented by multiple people independently.  Surely if Thomas Edison had never lived, someone else would have invented the light bulb.  Does that mean that the light bulb is not a true creation of the imagination, that it was not invented but always existed “objectively” before Edison came along and “discovered” it?  I don’t think so.  Likewise with modern modes of ground transportation, air transportation, manufacturing technology, and so on.  They’re all apt to be imagined and invented by multiple people working independently; it’s just that patent law recognizes only the first inventor to file.

It’s true that in other fields of human knowledge, such as literature, one is more likely to find creations that are truly unique.  Yes, Anna Karenina would likely never have been written by anyone else in the absence of Tolstoy.  However, even in literature, there are themes that are universal; character names and specific plot developments may vary, but many stories are variations on the same theme.  Consider the following story: two characters from different social groups meet and fall in love; the two social groups are antagonistic toward each other and would disapprove of the love; the two lovers meet secretly, but are eventually discovered; one or both lovers die tragically.  Is this not the basic plot of multiple stories, plays, operas, and musicals going back two thousand years?

Dr. Frenkel does admit that not all mathematical concepts correspond to physical reality.  But if there is not a correspondence to something in physical reality, what does it mean to say that a mathematical concept exists objectively?  How do we prove something exists objectively if it is not in physical reality?

If one looks at the history of mathematics, there is an intriguing pattern in which the earliest mathematical symbols do indeed seem to point to or correspond to objects in physical reality; but as time went on and mathematics advanced, mathematical concepts became more and more creative and distant from physical reality.  These later mathematical concepts were controversial among mathematicians at first, but later became widely adopted, not because someone proved they existed, but because the concepts seemed to be useful in solving problems that could not be solved any other way.

The earliest mathematical concepts were the “natural numbers,” the numbers we use for counting (1, 2, 3 . . .).  Simple operations were derived from these natural numbers.  If I have two apples and add three apples, I end up with five apples.  However, the number zero was initially controversial — how can nothing be represented by something?  The ancient Greeks and Romans, for all of their impressive accomplishments, did not use zero, and the number zero was not adopted in Europe until the Middle Ages.

Negative numbers were also controversial at first.  How can one have “negative two apples” or a negative quantity of anything?  However, it became clear that negative numbers were indeed useful conceptually.  If I have zero apples and borrow two apples from a neighbor, according to my mental accounting book, I do indeed have “negative two apples,” because I owe two apples to my neighbor.  It is an accounting fiction, but it is a useful and valuable fiction.  Negative numbers were invented in ancient China and India, but were rejected by Western mathematicians and were not widely accepted in the West until the eighteenth century.

The numbers known as “imaginary numbers” were even more controversial, since they involve a quantity that, when squared, results in a negative number.  Since no ordinary number behaves this way, imaginary numbers were initially derided.  However, they proved to be such a useful conceptual tool in solving certain problems that they gradually became accepted.  Imaginary numbers have been used to solve problems in electric currents and quantum physics, and to describe rotations.
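The defining property of these numbers, and one of their practical uses, can be stated concretely.  Below is a minimal sketch using Python’s built-in complex-number type; the rotation example is a standard two-dimensional illustration of the idea, not an example drawn from the text above.

```python
import cmath

# The defining property of the imaginary unit: i squared equals -1.
i = complex(0, 1)
assert i * i == complex(-1, 0)

# One practical use: multiplying by e^(i*theta) rotates a point in the
# plane by the angle theta.
point = complex(1, 0)                          # the point (1, 0)
quarter_turn = cmath.exp(1j * cmath.pi / 2)    # a 90-degree rotation
rotated = point * quarter_turn                 # approximately the point (0, 1)
```

There is no “real” number i to be found anywhere; the fiction is accepted because, as with negative numbers, it makes otherwise intractable calculations routine.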

Professor Stephen Hawking has used imaginary numbers in his own work on understanding the origins of the universe, employing “imaginary time” in order to explore what it might be like for the universe to be finite in time and yet have no real boundary or “beginning.”  The potential value of such a theory in explaining the origins of the universe leads Professor Hawking to state the following:

This might suggest that the so-called imaginary time is really the real time, and that what we call real time is just a figment of our imaginations.  In real time, the universe has a beginning and an end at singularities that form a boundary to space-time and at which the laws of science break down.  But in imaginary time, there are no singularities or boundaries.  So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like.  But according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds.  So it is meaningless to ask: which is real, “real” or “imaginary” time?  It is simply a matter of which is the more useful description.  (A Brief History of Time, p. 144.)

If you have trouble understanding this passage, you are not alone.  I have a hard enough time understanding imaginary numbers, let alone imaginary time.  The main point that I wish to underline is that even the best theoretical physicists don’t bother trying to prove that their conceptual tools are objectively real; the only test of a conceptual tool is if it is useful.

As a final example, let us consider one of the most intriguing of imaginary mathematical objects, the “hypercube.”  A hypercube is a cube that extends into additional dimensions, beyond the three spatial dimensions of an ordinary cube.  (Time is usually referred to as the “fourth dimension,” but in this case we are dealing strictly with spatial dimensions.)  A hypercube can be imagined in four dimensions, five dimensions, eight dimensions, twelve dimensions — in fact, there is no limit to the number of dimensions a hypercube can have, though the hypercube gets increasingly complex and eventually impossible to visualize as the number of dimensions increases.
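The combinatorics of a hypercube can be written down directly from its definition.  The short Python sketch below (the function name is my own) computes the vertex and edge counts of an n-dimensional hypercube:

```python
def hypercube_counts(n):
    """Vertex and edge counts of an n-dimensional hypercube.

    An n-cube has 2**n vertices; each vertex meets n edges and each
    edge joins two vertices, giving n * 2**(n - 1) edges in total.
    """
    vertices = 2 ** n
    edges = n * 2 ** (n - 1)
    return vertices, edges

# A square (n=2) has 4 vertices and 4 edges; a cube (n=3) has 8 and 12;
# a four-dimensional hypercube ("tesseract") has 16 vertices and 32 edges.
```

Nothing in the definition stops at four dimensions: the formula handles a 32,458-dimensional hypercube just as readily as a square, even though no one can visualize it.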

Does a hypercube correspond to anything in physical reality?  Probably not.  While there are theories in physics that posit five, eight, ten, or even twenty-six spatial dimensions, these theories also posit that the additional spatial dimensions beyond our familiar three are curled up in very, very small spaces.  How small?  A million million million million millionth of an inch, according to Stephen Hawking (A Brief History of Time, p. 179).  So as a practical matter, hypercubes could exist only on the most minute scale.  And that’s probably a good thing, as Stephen Hawking points out, because in a universe with four fully-sized spatial dimensions, gravitational forces would become so sensitive to minor disturbances that planetary systems, stars, and even atoms would fly apart or collapse (pp. 180-81).

Dr. Frenkel would admit that hypercubes may not correspond to anything in physical reality.  So how do hypercubes exist?  Note that there is no limit to how many dimensions a hypercube can have.  Does it make sense to say that the hypercube consisting of exactly 32,458 dimensions exists objectively out there somewhere, waiting for someone to discover it?   Or does it make more sense to argue that the hypercube is an invention of the human imagination, and can have as many dimensions as can be imagined?  I’m inclined to the latter view.

Many scientists insist that mathematical objects must exist out there somewhere because they’ve been taught that a good scientist must be objective and dedicate him or herself to the discovery of things that exist independently of the human mind.  But there are too many mathematical ideas that are clearly products of the human mind, and they are too useful to abandon merely because they are products of the mind.

Omnipotence and Human Freedom

Prayson Daniel writes about Christian author C. S. Lewis’s attempt to deal with the problem of evil here and here.  Lewis, who suffered tragic loss at an early age, became an atheist when young, but later converted to Christianity.  Lewis directly addressed the challenge of the atheists’ argument — why would an omnipotent and benevolent God allow evil to exist? — in his books The Problem of Pain and Mere Christianity.

Central to Lewis’s argument is the notion that the freedom to do good or evil is essential to being human.  If human beings were always compelled to do good, they would not be free, and thus would be unable to attain genuine happiness.

One way to illustrate the necessity of freedom is to imagine a world in which human beings were unable to commit evil — no violence, no stealing, no lying, no cheating, no betrayal.  At first, such a world might appear to be a paradise.  But the price would be this: essentially we would all be nothing but robots.  Without the ability to commit evil, doing good would have no meaning.  We would do good simply because we were programmed or compelled to do nothing but good.  There would be no choices because there would be no alternatives.  Love and altruism would have no meaning because they would not be freely chosen.

Let us imagine a slightly different world, a world in which freedom is allowed, but God always intervenes to reward the good and punish the guilty.  No good people ever suffer.  Earthquakes, fires, disease, and other natural disasters injure and kill only those who are guilty of evil.  Those who do good are rewarded with good health, riches, and happiness.  This world seems only slightly better than the world in which we are robots.  In this second world, we are mere zoo animals or pets.  We would be trained by our master to expect treats when we behave and punishment when we misbehave.  Again, doing good would have no meaning in this world — we would simply be advancing our self-interest, under constant, inescapable surveillance and threat of punishment.  In some ways, life in this world would be almost as regimented and monotonous as in the world in which we are compelled to do good.

For these reasons, I find the “free will” argument for the existence of evil largely persuasive when it comes to explaining the existence of evil committed by human beings.  I can even see God as having so much respect for our freedom that he would stand aside even in the face of an enormous crime such as genocide.

However, I think that the free will argument is less persuasive when it comes to accounting for evils committed against human beings by natural forces — earthquakes, fires, floods, disease, etc.  Natural forces don’t have free will in the same sense that human beings do, so why doesn’t God intervene when natural forces threaten life?  Granted, it would be asking too much to expect that natural disasters happen only to the guilty.  But the evils resulting from natural forces seem to be too frequent, too immense, and too random to be attributed to the necessity of freedom.  Why does freedom require the occasional suffering and death of even small children?  It’s hard to believe that small children have even had enough time to live in order to exercise their free will in a meaningful way.

Overall, the scale of divine indifference in cases of natural disaster is too great for me to think that it is part of a larger gift of free will.  For this reason, I am inclined to think that there are limits on God’s power to make a perfect world, even if the freedom accorded to human beings is indeed a gift of God.

Miracles

The Oxford English Dictionary defines a “miracle” as “a marvelous event occurring within human experience, which cannot have been brought about by any human power or by the operation of any natural agency, and must therefore be ascribed to the special intervention of the Deity or some supernatural being.”  (OED, 1989)  This meaning reflects how the word “miracle” has been commonly used in the English language for hundreds of years.

Since a miracle, by definition, involves a suspension of physical laws in nature by some supernatural entity, the question of whether miracles take place, or have ever taken place, is an important one.  Most adherents of religion — any religion — are inclined to believe in miracles; skeptics argue that there is no evidence to support the existence of miracles.

I believe skeptics are correct that the evidence for a supernatural agency occasionally suspending the normal processes and laws of nature is very weak or nonexistent.  Scientists have been studying nature for hundreds of years; when an observed event does not appear to follow physical laws, it usually turns out that the law is imperfectly understood and needs to be modified, or there is some other physical law that needs to be taken into account.  Scientists have not found evidence of a supernatural being behind observational anomalies.  This is not to say that everything in the universe is deterministic and can be reduced to physical laws.  Most scientists agree that there is room for indeterminacy in the universe, with elements of freedom and chance.  But this indeterminacy does not seem to correspond to what people have claimed as miracles.

However, I would like to make the case that the way we think about miracles is all wrong, that our current conception of what counts as a miracle is based on a mistaken prejudice in favor of events that we are unaccustomed to.

According to the Oxford English Dictionary, the word “miracle” is derived from the Latin word “miraculum,” which is an “object of wonder.” (OED 1989)  A Latin dictionary similarly defines “miraculum” as “a wonderful, strange, or marvelous thing, a wonder, marvel, miracle.” (Charlton T. Lewis, A Latin Dictionary, 1958)  There is nothing in the original Latin conception of miraculum that requires a belief in the suspension of physical laws.  Miraculum is simply about wonder.

Wonder as an activity is an intellectual exercise, but it is also an emotional disposition.  We wonder about the improbable nature of our existence, we wonder about the vastness of the universe, we wonder about the enormous complexity and diversity of life.  From wonder often come other emotional dispositions: astonishment, puzzlement, joy, and gratitude.

The problem is that in our humdrum, everyday lives, it is easy to lose wonder.  We become accustomed to existence through repeated exposure to the same events happening over and over, and we no longer wonder.  The satirical newspaper The Onion expresses this disposition well: “Miracle Of Birth Occurs For 83 Billionth Time,” reads one headline.

Is it really the case, though, that a wondrous event ceases to be wondrous because it occurs frequently, regularly, and appears to be guided by causal laws?  The birth of a human being begins with blueprints provided by an egg cell and sperm cell; over the course of nine months, over 100,000,000,000,000,000,000,000,000 atoms of oxygen, carbon, hydrogen, nitrogen, and other elements gradually come together in the right place at the right time to form the extremely intricate arrangement known as a human being.  If anything is a miraculum, or wonder, it is this event.  But because it happens so often, we stop noticing.  Stories about crying statues, or people seeing the heart of Jesus in a communion wafer, or the face of Jesus in a sock get our attention and are hailed as miracles because these alleged events are unusual.  But if you think about it, these so-called miracles are pretty insignificant in comparison to human birth.  And if crying statues were a frequent occurrence, people would gradually become accustomed to them; after a while, they would stop caring, and start looking around for something new to wonder about.

What a paradox.  We are surrounded by genuine miracles every day, but we don’t notice them.  So we grasp at the most trivial coincidences and hoaxes in order to restore our sense of wonder, when what we should be doing is not taking so many wonders for granted.

Misunderstanding Manicheanism

A lot of religions and philosophies are misunderstood to varying degrees, but if I had to pick one religion or philosophy as being the most misunderstood it would be Manicheanism.  First propounded by the prophet Mani (or Manes) in Persia in the third century C.E., this religion viewed the universe as consisting of a battle between the forces of light and the forces of darkness.  God was good, but was not all-powerful, which is why there was evil in the world.  Human beings and other material things were a mixture of the forces of light and forces of darkness; the task of human beings was to separate the light from the dark by shunning evil and doing good deeds.

In modern day America, the term “Manichean” is used disparagingly, as a way of attacking those who see political or social conflict as being wars of good vs. evil.  A Manichean view, it is argued or implied, depicts the self as purely good, opponents as demonic, and compromise as virtually impossible.  A recent example of this is a column by George Will about the negotiations over Iran’s nuclear program.  Will describes Iran as being “frightening in its motives (measured by its rhetoric) and barbaric in its behavior,” and quotes author Kenneth Pollack, who notes that Manicheanism was a Persian (Iranian) religion that “conceived of the world as being divided into good and evil.”  Of course, Manicheanism no longer has a significant presence in modern-day Iran, but you get the point — those Persians have always been simple-minded fanatics.

Let’s correct this major misconception right now: Manicheanism does NOT identify any particular tribe, group, religion, or nation as being purely good or purely evil.  Manicheanism sees good and evil as cosmological forces that are mixed in varying degrees in the material things we see all around us.  Humanity, in this view, consists of forces of light (good) mixed with forces of darkness (evil); the task of humanity is to seek and release this inner light, not to label other human beings as evil and do battle with them.

If anything, Manicheanism was one of the most cosmopolitan and tolerant religions in history.  Manicheanism aimed to be a universal religion and incorporated elements of Christianity, Zoroastrianism, Buddhism, and Hinduism. The most dedicated adherents of Manicheanism were required to adopt a life of nonviolence, including vegetarianism.  For their trouble, Manicheans were persecuted and killed by the Christian, Buddhist, and Muslim societies in which they lived.

The Manichean view of human beings as being a mixture of good and evil is really a mainstream view shared by virtually all religions.  Alexander Solzhenitsyn has described this insight well:

It was granted to me to carry away from my prison years on my bent back, which nearly broke beneath its load, this essential experience: how a human being becomes evil and how good.  In the intoxication of youthful successes I had felt myself to be infallible, and I was therefore cruel.  In the surfeit of power I was a murderer and an oppressor.  In my most evil moments I was convinced that I was doing good, and I was well supplied with systematic arguments.  It was only when I lay there on rotting prison straw that I sensed within myself the first stirrings of good.  Gradually it was disclosed to me that the line separating good and evil passes not through states, nor between classes, nor between political parties either, but right through every human heart, and through all human hearts.  This line shifts.  Inside us, it oscillates with the years.  Even within hearts overwhelmed by evil, one small bridgehead of good is retained; and even in the best of all hearts, there remains a small corner of evil.

Since then I have come to understand the truth of all the religions of the world: they struggle with the evil inside a human being (inside every human being).  It is impossible to expel evil from the world in its entirety, but it is possible to constrict it within each person.

This is what Manicheanism teaches: the battle between good and evil lies within all humans, not between purely good humans and purely evil humans.

Objectivity is Not Scientific

It is a common perception that objectivity is a virtue in the pursuit of knowledge, that we need to know things as they really are, independent of our mental conceptions and interpretations.  It is also a common perception that science is the form of knowledge that is the most objective, and that is why scientific knowledge makes the most progress.

Yet the principle of objectivity immediately runs into problems in the most famous scientific theory, Einstein’s theory of relativity.  According to relativity theory, there is no objective way to measure objects in space and time — these measures are always relative to the observer, depending on the velocity at which the objects and observers are travelling, and observers often end up with different measures for the same object as a result.  For example, an object travelling at very high speed will appear shorter, along its direction of motion, to an outside observer, a phenomenon known as length contraction.  In addition, time will move more slowly for an observer travelling at high speed than for an observer travelling at low speed.  This phenomenon is illustrated by the “twin paradox”: given a pair of twins, if one sets off in a high-speed rocket while the other stays on earth, the travelling twin will return having aged less than the twin on earth.  Finally, the sequence of two spatially-separated events, say Event A and Event B, will differ according to the position and velocity of the observer.  Some observers may see Event A occurring before Event B, others may see Event B occurring before Event A, and others will see the two events as simultaneous.  There is no objectively true sequence of events.
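The size of these effects follows from the standard Lorentz factor, γ = 1/√(1 − v²/c²).  The sketch below uses textbook formulas and round numbers of my own choosing, not figures from this post:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v):
    """Gamma = 1 / sqrt(1 - v**2 / c**2) for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A rod moving past us at 80% of light speed:
gamma = lorentz_factor(0.8 * C)      # approximately 5/3
contracted = 1.0 / gamma             # a 1-metre rod measures about 0.6 m to us
dilated = 1.0 * gamma                # 1 s of its clock takes about 1.67 s of ours
```

At everyday speeds γ is indistinguishable from 1, which is why these observer-dependent effects escaped notice for so long.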

The theory of relativity does not say that everything is relative.  The speed of light, for example, is the same for all observers, whether they are moving at a fast speed toward a beam of light or away from a beam of light.  In fact, it was the absolute nature of light speed for all moving observers that led Einstein to conclude that time itself must be different for different observers.  In addition, for any two events that are causally-connected, the events must take place in the same sequence for all observers.  In other words, if Event A causes Event B, Event A must precede Event B for all observers.  So relativity theory sees some phenomena as different for different observers and others as the same for different observers.

Finally, the meaning of relativity in science is not that one person’s opinion is just as valid as anyone else’s.  Observers within the same frame of reference (say, multiple observers travelling together in the same vehicle) should agree on measurements of length and time for an outside object even if observers from other reference frames have different results.  If observers within the same vehicle don’t agree, then something is wrong — perhaps someone is misperceiving, or misinterpreting, or something else is wrong.

Nevertheless, if one accepts the theory of relativity, and this theory has been accepted by scientists for many decades now, one has to accept the fact that there is no objective measure of objects in space and time — it is entirely observer-dependent.  So why do many cling to the notion of objectivity as a principle of knowledge?

Historically, the goal of objectivity was proposed as a way to solve the problem of subjective error.  Individual subjects have imperfect perceptions and interpretations.  What they see and claim is fallible.  The principle of objectivity tries to overcome this problem by proposing that we need to evaluate objects as they are in themselves, in the absence of human mind.  The problem with this principle is that we can’t really step outside of our bodies and minds and evaluate an object.

So how do we overcome the problem of subjective error?  The solution is not to abandon mind, but to supplement it, by communicating with other minds, checking for individual error by seeing if others are getting different results, engaging in dialogue, and attempting to come to a consensus.  Observations and experiments are repeated many times by many different people before conclusions are established.  In this view, knowledge advances by using the combined power of thousands and thousands of minds, past and present.  It is the only way to ameliorate a faulty relationship between subject and object and make that relationship better.

In the end, all knowledge, including scientific knowledge, is essentially and unalterably about the relationship between subjects and objects — you cannot find true knowledge by splitting objects from subjects any more than you can split H2O into its individual atoms of hydrogen and oxygen and expect to find water in the component parts.

How Powerful is God?

In a previous post, we discussed the non-omnipotent God of process theology as a possible explanation for the twin facts that the universe appears to be fine-tuned for life and yet evolution is extremely slow and life precarious.  The problem with process theology, however, is that God appears to be extremely weak.  Is the concept of a non-omnipotent God worthwhile?

One response to this criticism is that portraying God as weak simply because the universe was not instantaneously and perfectly constructed for life is to misconstrue what “weak” means.  The mere fact that the universe, consisting of at least 10,000,000,000,000,000,000,000 stars, was created out of nothingness and has lasted over 13 billion years does not seem to indicate weakness.

Another response would be that the very gradual incrementalism of evolution may be a necessary component of a fantastically complex system that cannot tolerate errors that would threaten to destroy the system.  That is, the various physical laws and numerical constants that underlie the order of the universe exist in such an intricate relationship that a violation of a law in one particular case or sudden change in one of the constants would cause the universe to self-destruct, in the same way that a computer program may crash if a single line of code is incorrect or is incompatible with the other lines of code.

In fact, a number of physicists have explicitly described the universe as a type of computer, in the sense that the order of the universe is based on the processing of information in the form of the physical laws and constants.  Of course, the chief difference between the universe and a computer is that we can live with a computer crashing occasionally — we cannot live with the universe crashing even once.  Thus the fact that the universe, while not immortal, never seems to crash, indicates that gradual evolution may be necessary.  Perhaps instability on the micro level of the universe (an asteroid occasionally crashing into a planet with life) is the price to be paid for stability on the macro level.

Alternatively, we can conceptualize the order behind the universe as a type of mind, “mind” being defined broadly as any system for processing information.  We can posit three types of mind in the historical development of the universe: cosmic mind (God), biological mind (human/animal mind), and electronic mind (computer).

Cosmic mind can be thought of as pure spirit, or pure information, if you will.  Cosmic mind can create matter and a stable foundation for the universe, but once matter is created, the influence of spirit on matter is relatively weak.  That is, there is a division between the world of spirit and the world of matter that is difficult to bridge.  Biological mind does not know everything cosmic mind does and it is limited in time and space, but biological mind can more efficiently act on matter, since it is part of the world of matter.  Electronic mind (computer) is a creation of biological mind but processes larger amounts of information more quickly, assisting biological mind in the manipulation of matter.

As a result, the evolution of the universe began very slowly, but has recently accelerated as a result of incremental improvements to mind.  According to Stephen Hawking,

The process of biological evolution was very slow at first. It took two and a half billion years, to evolve from the earliest cells to multi-cell animals, and another billion years to evolve through fish and reptiles, to mammals. But then evolution seemed to have speeded up. It only took about a hundred million years, to develop from the early mammals to us. . . . [W]ith the human race, evolution reached a critical stage, comparable in importance with the development of DNA. This was the development of language, and particularly written language. It meant that information can be passed on, from generation to generation, other than genetically, through DNA. . . .  [W]e are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. . . . If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars.  (“Life in the Universe“)

According to physicist Freeman Dyson (Disturbing the Universe), even if interstellar spacecraft achieve only one percent of the speed of light, a speed within the possibility of present-day technology, the Milky Way galaxy could be colonized end-to-end in ten million years — a very long time from an individual human’s perspective, but a remarkably short time in the history of evolution, considering it took 2.5 billion years simply to make the transition from single-celled life forms to multi-celled creatures.
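Dyson’s figure is easy to check with back-of-envelope arithmetic.  The galactic diameter used below (roughly 100,000 light-years) is an assumed round number for illustration, not a figure quoted from Dyson’s book.

```python
# Back-of-envelope check of the ten-million-year figure: a distance in
# light-years divided by a speed (as a fraction of light speed) gives a
# travel time in years.
diameter_ly = 100_000        # Milky Way end to end, in light-years (assumed)
speed = 0.01                 # spacecraft speed: one percent of light speed
travel_time_years = diameter_ly / speed
print(travel_time_years)     # ten million years to cross the galaxy
```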

So cosmic mind can be very powerful in the long run, but patience is required!

A Universe Half Full?

It has often been said that the difference between a pessimist and an optimist is that a pessimist sees a half-poured beverage as a glass half empty, whereas an optimist sees the glass as being half full.  I think the decision to adopt or reject atheism may originate from such a perspective — that is, atheists see the universe as half empty, whereas believers see the universe as half full.  We all go through life experiencing events both good and bad, moments of joy, beauty, and wonder, along with moments of despair, ugliness, and boredom.  When we experience the positive, we may be inclined to attribute purpose and benevolence to the universal order; when we experience the negative, we may be more apt to attribute disorder and meaninglessness to the universe.

So, is it all a matter of perspective?  If we are serious thinkers, we have to reject the conclusion that it is merely a matter of perspective.  Either there is a God or there isn’t.  If we are going to explain the universe, we have to explain everything, good and bad, and not neglect facts that don’t fit.

The case for atheism is fairly straightforward: the facts of science indicate a universe that is not very hospitable to either the emergence of life or the protection of life, which greatly undercuts the case for an intelligent designer.  Most planets have no life, except perhaps for the most primitive, insignificant forms of life.  Where life does exist, life is precarious and cruel; on a daily basis, life forms are attacked and destroyed by hostile physical forces and other life forms.  There is not the slightest historical or archaeological evidence of a “golden age” or a “Garden of Eden” which once existed but was lost because of man’s sinfulness; life has always been precarious and cruel.  Even where life has developed, it has developed in a process of very gradual evolution, consisting of much randomness, over the course of billions of years.  And despite progress over billions of years, life on earth has been subject to occasional mass extinction events, from an asteroid or comet striking the planet, to volcanic eruptions, to dramatic climate change.  Even if one granted that God created life very gradually, the notion that God would allow a dumb rock from space to wipe out the accomplishments of several billion years of evolution seems inexplicable.

The case for belief in God rests on a contrary claim, namely that order in the universe is too complex and unusual to be explained merely by reference to purposeless physical laws and random events.  It may appear that physical laws operate without apparent purpose, such as when an asteroid causes mass extinction, and evolution certainly consists of many random events.  But there is too much order to subscribe to the view that the universe is nothing but blind laws and random events.  When one studies the development of the stars and planets and their predictable motions, the vast diversity and complexity of life on earth, and the amount of information contained in a single DNA molecule, randomness is not the first thing one thinks of.  Total randomness implies total disorder and a total lack of pattern, but the randomness we see in the universe takes place within a certain structure.  If you roll a die, there are six possible outcomes; if you flip a coin there are two possible outcomes.  Both actions are random, but a structure of order determines the range of possible outcomes.  Likewise, there is randomness and disorder in the universe, but there is a larger structure of order that provides general stability and restricts outcomes.  Mutations take place in life forms, but these mutations are limited and incremental, restricting the range of possible outcomes and allowing the development of new forms of life on top of old forms of life.

Physicists tend to agree that we appear to live in a universe “fine-tuned” for life, in the sense that many physical constants can only exist with certain values, or life would not be able to evolve.  According to Stephen Hawking, “The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and electron. . . . The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”  Physicist Paul Davies writes:

 [L]ife as we know it depends very sensitively on the form of the laws of physics, and on some seemingly fortuitous accidents in the actual values that nature has chosen for various particle masses, force strengths, and so on. . . . [I]f we could play God, and select values for these quantities at whim by twiddling a set of knobs, we would find that almost all knob settings would render the universe uninhabitable.  In some cases it seems as if the different knobs have to be fine-tuned to enormous precision if the universe is to be such that life will flourish. (The Mind of God, pp. 199-200).

The counterargument to the “fine-tuned” argument is that there could exist many universes that self-destruct in a short period of time or don’t have life — we just happen to live in a fine-tuned universe because only a fine-tuned universe can allow the existence of life forms that think about how fine-tuned the universe is!  However, this argument rests on the hypothetical belief that many alternative universes have existed or do exist, and until there is evidence for other universes, it must remain highly speculative.

So how do we reconcile the two sets of facts presented by the atheists and the believers?  On the one hand, the universe appears to allow life to develop only extremely gradually under often hostile conditions, with many setbacks along the way.  On the other hand, the universe appears to be fine-tuned to support life, suggesting some sort of cosmic purpose or intelligence.

In my view, the only way to reconcile the two sets of facts is to conceive of God as being very powerful, but not omnipotent.  (See a previous posting on this subject.)  According to process theology, God’s power is not coercive but persuasive, and God acts over long periods of time to create.  Existing things are not subject to total central control, but God can influence outcomes.

An analogy could be made with the human mind and its control over the body.  It is easy to raise one’s right arm by using one’s thoughts, but to pitch a fastball, play a piano, or make a high-quality sculpture requires a level of coordination and skill that most of us do not have — as well as an extraordinary amount of training and practice.  In the course of life, we attempt many things, but are never successful at all we attempt; in fact, the ambitions in our minds usually outpace our physical abilities.  Some people do not even have the ability to raise their right arm.  The relation of a cosmic mind to the “body” of the universe may be similar in principle.

Some would object that the God of process theology is ridiculously weak.  A God that has only the slightest influence over matter and cannot even stop an asteroid from hitting a planet does not seem like a God worth worshiping or even respecting.  In fact, why do we even need the concept of a weak God — wouldn’t we be better off without it?  I will address this topic in a future posting.

The Role of Imagination in Science, Part 2

In a previous posting, we examined the status of mathematical objects as creations of the human mind, not objectively existing entities.  We also discussed the fact that the science of geometry has expanded from a single system to a great many systems, with no single system being true.  So what prevents mathematics from falling into nihilism?

Many people seem to assume that if something is labeled as “imaginary,” it is essentially arbitrary or of no consequence, because it is not real.  If something is a “figment of imagination” or “exists only in your mind,” then it is of no value to scientific knowledge.  However, two considerations impose limits or restrictions on imagination that prevent descent into nihilism.

The first consideration is that even imaginary objects have properties that are real or unavoidable, once they are proposed.  In The Mathematical Experience, mathematics professors Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.”  This may be a difficult concept to grasp (it took me a long time to grasp it), but consider some simple examples:

Imagine a circle in your mind.  Got that?  Now imagine a circle in which the radius of the circle is greater than the circumference of the circle.  If you are imagining correctly, it can’t be done.  Whether or not you know that the circumference of a circle is equal to twice the radius times pi, you should know that the circumference of a circle is always going to be larger than the radius.

Now imagine a right triangle.  Can you imagine a right triangle with a hypotenuse that is shorter than either of the two other sides?  No, whether or not you know the Pythagorean theorem, it’s in the very nature of a right triangle to have a hypotenuse that is longer than either of the two remaining sides.  This is what we mean by “true facts about imaginary objects.”  Once you specify an imagined object with certain basic properties, other properties follow inevitably from those initial, basic properties.
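Both facts can be spot-checked numerically.  The sketch below samples random circles and right triangles and confirms the claims hold in every trial; it is an illustration of the idea, not a proof, since the facts follow from the definitions themselves.

```python
import math
import random

# Numerical spot-check of the two "true facts about imaginary objects":
# a circle's circumference always exceeds its radius, and a right
# triangle's hypotenuse always exceeds either of the other two sides.
for _ in range(1000):
    r = random.uniform(0.001, 1000.0)
    circumference = 2 * math.pi * r
    assert circumference > r            # circumference exceeds radius

    a = random.uniform(0.001, 1000.0)
    b = random.uniform(0.001, 1000.0)
    hypotenuse = math.hypot(a, b)       # sqrt(a**2 + b**2)
    assert hypotenuse > max(a, b)       # hypotenuse exceeds either leg

print("no counterexamples in 1000 random trials")
```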

The second consideration that puts restrictions on the imagination is this: while it may be possible to invent an infinite number of mathematical objects, only a limited number of those objects is going to be of value.  What makes a mathematical object of value?  In fact, there are multiple criteria for valuing mathematical objects, some of which may conflict with each other.

The most important criterion of mathematical objects according to scientists is the ability to predict real-world phenomena.  Does a particular equation or model allow us to predict the motion of stars and planets; or the multiplication of life forms; or the growth of a national economy?  This ability to predict is a most powerful attribute of mathematics — without it, it is not likely that scientists would bother using mathematics at all.

Does the ability to predict real-world phenomena demonstrate that at least some mathematical objects, however imaginary, correspond to or model reality?  Yes — and no.  For in most cases it is possible to choose from a number of different mathematical models that are approximately equal in their ability to predict, and we are still compelled to refer to other criteria in choosing which mathematical object to use.  In fact, there are often tradeoffs when evaluating various criteria — often, no single mathematical object is best on all criteria.

One of the most important criteria after predictive ability is simplicity.  Although it has been demonstrated that Euclidean geometry is not the only type of geometry, it is still widely used because it is the simplest.  In general, scientists like to begin with the simplest model first; if that model becomes inadequate in predicting real-world events, they modify the model or choose a new one.  There is no point in starting with an unnecessarily complex geometry, and when one’s model gets too complex, the chance of error increases significantly.  In fact, simplicity is regarded as an important aspect of mathematical beauty — a mathematical proof that is excessively long and complicated is considered ugly, while a simple proof that provides answers with few steps is beautiful.

Another criterion for choosing one mathematical object over another is scope or comprehensiveness.  Does the mathematical object apply only in limited, specific circumstances?  Or does it apply broadly to phenomena, tying together multiple events under a single model?

There is also the criterion of fruitfulness.  Is the model going to provide many new research findings?  Or is it going to be limited to answering one or two questions, providing no basis for additional progress?

Ultimately, it’s impossible to get away from value judgments when evaluating mathematical objects.  Correspondence to reality cannot be the only value.  Why do we use the Hindu-Arabic numeral system today and not the Roman numeral system?  I don’t think it makes sense to say that the Hindu-Arabic system corresponds to reality more accurately than the Roman numeral system.  Rather, the Hindu-Arabic numeral system is easier to use for many calculations, and it is more powerful in obtaining useful results.  Likewise a base 10 numeral system doesn’t correspond to reality more accurately than a base 2 numeral system — it’s just easier for humans to use a base 10 system.  For computers, it is easier to use a base 2 system.  A base 60 system, such as the ancient Babylonians used, is more difficult for many calculations than a base 10, but it is more useful in measuring time and angles.  Why?  Because 60 has so many divisors (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60) it can express fractions of units more simply, which is why we continue to use a modified version of base 60 for measuring time and angles (and geographic coordinates) to this day.
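The point about divisors is easy to verify.  A minimal sketch, using a brute-force divisor search for clarity:

```python
def divisors(n):
    """All whole-number divisors of n, found by brute force."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))   # twelve divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60
print(divisors(10))   # only four: 1, 2, 5, 10

# Each divisor of 60 yields an even split of an hour into whole minutes:
print([60 // d for d in divisors(60)])
```

This is why an hour divides cleanly into halves, thirds, quarters, fifths, and sixths, while a base 10 unit offers far fewer even splits.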

What about mathematical objects that don’t predict real world events or appear to model anything in reality at all?  This is the realm of pure mathematics, and some mathematicians prefer this realm to the realm of applied mathematics.  Do we make fun of pure mathematicians for wasting time on purely imaginary objects?  No, pure mathematics is still a form of knowledge, and mathematicians still seek beauty in mathematics.

Ultimately, imaginative knowledge is not arbitrary or inconsequential; there are real limits even for the imagination.  There may be an infinite number of mathematical systems that can be imagined, but only a limited number will be good.  Likewise, there is an infinite variety of musical compositions, paintings, and novels that can be created by the imagination, but only a limited number will be good, and only a very small number will be truly superb.  So even the imagination has standards, and these standards apply as much to the sciences as to the arts.

The Role of Imagination in Science, Part 1

In Zen and the Art of Motorcycle Maintenance, author Robert Pirsig argues that the basic conceptual tools of science, such as the number system, the laws of physics, and the rules of logic, have no objective existence, but exist in the human mind.  These conceptual tools were not “discovered” but created by the human imagination.  Nevertheless we use these concepts and invent new ones because they are good — they help us to understand and cope with our environment.

As an example, Pirsig points to the uncertain status of the number “zero” in the history of western culture.  The ancient Greeks were divided on the question of whether zero was an actual number – how could nothing be represented by something? – and did not widely employ zero.  The Romans’ numerical system also excluded zero.  It was only in the Middle Ages that the West finally adopted the number zero by accepting the Hindu-Arabic numeral system.  The ancient Greek and Roman civilizations did not neglect zero because they were blind or stupid.  If future generations adopted the use of zero, it was not because they suddenly discovered that zero existed, but because they found the number zero useful.

In fact, while mathematics appears to be absolutely essential to progress in the sciences, mathematics itself continues to lack objective certitude, and the philosophy of mathematics is plagued by questions of foundations that have never been resolved.  If asked, the majority of mathematicians will argue that mathematical objects are real, that they exist in some unspecified eternal realm awaiting discovery by mathematicians; but if you follow up by asking how we know that this realm exists, how we can prove that mathematical objects exist as objective entities, mathematicians cannot provide an answer that is convincing even to their fellow mathematicians.  For many decades, according to mathematicians Philip J. Davis and Reuben Hersh, the brightest minds sought to provide a firm foundation for mathematical truth, only to see their efforts founder (“Foundations, Found and Lost,” in The Mathematical Experience).

In response to these failures, mathematicians divided into multiple camps.  While the majority of mathematicians still insisted that mathematical objects were real, the school of fictionalism claimed that all mathematical objects were fictional.  Nevertheless, the fictionalists argued that mathematics was a useful fiction, so it was worthwhile to continue studying mathematics.  In the school of formalism, mathematics is described as a set of statements about the consequences of following certain rules of the game — one can create many “games,” and these games have different outcomes resulting from different sets of rules, but the games may not be about anything real.  The school of finitism argues that only the natural numbers (i.e., numbers for counting, such as 1, 2, 3 . . .) and numbers that can be derived from the natural numbers are real; all other numbers are creations of the human mind.  Even if one dismisses these schools as being only a minority, the fact that there is such stark disagreement among mathematicians about the foundations of mathematics is unsettling.

Ironically, as mathematical knowledge has increased over the years, so has uncertainty.  For many centuries, it was widely believed that Euclidean geometry was the most certain of all the sciences.  However, by the late nineteenth century, it was discovered that one could create different geometries that were just as valid as Euclidean geometry — in fact, it was possible to create an infinite number of valid geometries.  Instead of converging on a single, true geometry, mathematicians have seemingly gone into all different directions.  So what prevents mathematics from falling into complete nihilism, in which every method is valid and there are no standards?  This is an issue we will address in a subsequent posting.
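A concrete taste of one such alternative geometry: on the surface of a sphere, the angles of a triangle sum to more than 180 degrees.  The octant triangle below is a standard textbook illustration, not an example drawn from this post.

```python
import math

# A standard illustration of spherical (non-Euclidean) geometry: a triangle
# running from the north pole down to the equator, a quarter of the way
# along the equator, and back up to the pole has three 90-degree angles.
angles = [90, 90, 90]
print(sum(angles))         # 270 degrees, versus exactly 180 in Euclidean geometry

# Girard's theorem: on a unit sphere, a triangle's area equals its angle
# sum minus 180 degrees, converted to radians (the "spherical excess").
excess = math.radians(sum(angles) - 180)
print(round(excess, 4))    # pi/2, i.e. one eighth of the sphere's 4*pi surface area
```

Within spherical geometry this result is perfectly consistent; it simply follows from axioms different from Euclid’s, which is precisely why no single geometry can claim to be the uniquely true one.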