Christopher Hitchens: An Excess of Errors

I recently finished reading the late Christopher Hitchens’ book god is not Great: How Religion Poisons Everything.

In some parts, the book is delightful, and I admire the author’s courage.  Although the social penalties for atheism are much lower in contemporary democratic societies than in other societies, past and present, there is still personal courage in facing up to the possibility that there is no God and no afterlife, which can be a distressing and demoralizing prospect for many.  The author’s main points about the inaccuracy or falsity of religious beliefs about cosmology and history, as well as the persistent use of religion historically to rationalize evil behavior (such as the trading or keeping of slaves), have been made by others, but the author’s arguments are not entirely unoriginal, and I definitely learned some new things.

Having said that, I also need to say this: god is not Great is filled with many errors — in many cases, obvious, egregious errors that should not have gotten past the editor’s desk.  (Do publishing houses even bother editing and fact-checking any more?)  Now, it is not unusual for even great scholarly books to have some errors of fact.  But when the errors are so numerous, and so significant, it can greatly undermine the case the author is making.  Frankly, I think Hitchens understands religion about as well as a fundamentalist understands evolution.  In a few cases, Hitchens does not even understand some basic facts of science.

Let us review the errors.  (Page numbers are from the paperback edition; they appear to match those in the hardcover edition, except for the afterword that was added to the paperback.)

p. 5  – “We [atheists] do not believe in heaven or hell, yet no statistic will ever find that without these blandishments and threats we commit more crimes of greed and violence than the faithful.  (In fact, if a proper statistical query could ever be made, I am sure the evidence would be the other way).”  – Actually, according to The Handbook of Crime Correlates (pp. 108-113), while there is some variation in studies, the majority of social science statistical studies have concluded that religious believers are less likely to engage in criminal behavior.  This is by no means a slam-dunk, as a minority of studies point the other way, but I find it remarkable that Hitchens thought that nobody even bothered to study this issue.  Although the Handbook came out after Hitchens’ book was published, the studies cited in the Handbook go back decades.

pp. 7, 63  – Hitchens acknowledges the intelligence and scholarship of theologians such as Augustine, Aquinas, Maimonides, and Newman, but argues “there are no more of them today and . . . there will be no more of them tomorrow.”  The reason for this, he writes, is that “Faith of that sort — the sort that can stand up at least for a while in a confrontation with reason — is now plainly impossible.”  Actually, there are numerous intelligent and accomplished modern theologians who have incorporated faith and reason into their world views, including Paul Tillich, Reinhold Niebuhr, and Karl Barth.  Pope John Paul II pursued graduate study in philosophy and incorporated insights from the philosophy of phenomenology into his doctoral dissertation.  Did Hitchens ever hear of these people and their works?  A quick Google search confirms that Hitchens did know of Niebuhr, which indicates to me that Hitchens was being dishonest.

p. 7 – “Religion spoke its last intelligible or noble or inspiring words a long time ago: either that or it mutated into an admirable but nebulous humanism, as did, say, Dietrich Bonhoeffer, a brave Lutheran pastor hanged by the Nazis for his refusal to collude with them.”  Dietrich Bonhoeffer was far from being a nebulous humanist.  In fact, Bonhoeffer’s theological ideas were fairly conservative and Bonhoeffer insisted on the need for total devotion to God and the saving grace of Jesus Christ.  “I believe that the Bible alone is the answer to all our questions,” Bonhoeffer once wrote.  Also, Bonhoeffer was not hanged for simply refusing to collude with the Nazis, but for actively opposing the Nazis and conspiring to assassinate Hitler.

pp. 12-13 – “there is a real and serious difference between me and my religious friends, and the real and serious friends are sufficiently honest to admit it.  I would be quite content to go to their children’s bar mitzvahs, to marvel at their Gothic cathedrals, to “respect” their belief that the Koran was dictated, though exclusively in Arabic, to an illiterate merchant, or to interest myself in Wicca and Hindu and Jain consolations.  And as it happens, I will continue to do this without insisting on the polite reciprocal condition — which is that they in turn leave me alone.  But this, religion is ultimately incapable of doing.”  Let’s leave aside the curious claim that Hitchens has religious friends who all happen to be grossly intolerant (unlucky him).  What is the evidence that religion in general is hopelessly intolerant, including the Jain religion?  Jainism, which Hitchens doesn’t bother discussing in any detail, places nonviolence at the very center of its beliefs.  Jains are so nonviolent that they practice vegetarianism and go to great lengths to avoid killing insects; some Jains even refuse to eat certain plants.  Jainism influenced Gandhi’s civil disobedience campaign, which in turn influenced Martin Luther King Jr.’s own nonviolence campaign.  Yet somehow those Jains just can’t leave Hitchens alone.  What a bizarre persecution complex.

pp. 25, 68 – Hitchens argues that the ancient works of Aristotle and other Greeks were lost under Christianity because “the Christian authorities had burned some, suppressed others, and closed the schools of philosophy, on the grounds that there could have been no useful reflections on morality before the preaching of Jesus.”  Actually, the works of Aristotle and other Greeks were lost for centuries in Western Europe, primarily because of the collapse of the Roman empire in the west, which negatively affected education, scholarship, libraries, and book-making in general.  In the east, the Byzantine empire, though a Christian state, preserved the works of Aristotle and incorporated Aristotle’s thoughts into Byzantine philosophies.  Monasteries in the Byzantine empire played an important role in preserving and copying books of the ancient Greeks.  Attitudes of Christians in Western Europe toward the philosophies of ancient Greece were mixed, with some condemning and suppressing Greek works, and others incorporating Greek works into their scholarship.

pp. 46-47 – “The attitude of religion to medicine, like the attitude of religion to science, is always necessarily problematic and very often necessarily hostile.”  Historically, medicine was not an alternative to prayer and devotion to God but a supplement to it.  The earliest hospitals were established in religious temples devoted to gods of healing.  While medical knowledge was primitive compared to today, even the ancients had some practical knowledge of surgery and anesthesia.  Many modern-day medications, such as aspirin, quinine, and ephedrine, have their roots in plants that the ancients used for healing.  The father of western medicine, Hippocrates, is famously known for his oath to the gods of healing, which calls for adherence to ethical rules in the practice of medicine.  And historically, both Christianity and Islam played major roles in the founding of hospitals and the study of medical science.

p. 68 – “[E]ven the religious will speak with embarrassment of the time when theologians would dispute over futile propositions with fanatical intensity: measuring the length of angels’ wings, for example, or debating how many such mythical creatures could dance on the head of a pin.”  The notion that theologians debated about how many angels danced on the head of a pin was actually an invention of post-medieval satirists who wanted to criticize theology.  Historically, theologians generally held that angels were incorporeal, or purely spiritual beings, and as such did not have “wings.”

p. 144 – While discussing persons who claim to have been visited by extraterrestrials, Hitchens argues, “travel from Alpha Centauri . . . would involve some bending of the laws of physics.”  Actually, Alpha Centauri is the closest star system to our own, a little over 4 light years away.  While I think it is most unlikely that extraterrestrials have visited earth, travel to or from Alpha Centauri would not require any bending of the laws of physics, only some incremental improvements in existing technologies based on the current laws of physics.  The travel would probably take decades, but would not be impossible.  Either Hitchens is arguing that interstellar travel is inherently impossible or he is claiming that advances in technology require “bending” the laws of physics.  Whatever he believed, it doesn’t make sense.
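
Just as a back-of-the-envelope illustration (my own arithmetic, not anything in the book): at even a modest fraction of light speed, a one-way trip between here and Alpha Centauri is a matter of decades, with no new physics required.

```python
# Rough one-way travel times to or from Alpha Centauri.
# The distance (~4.37 light-years) and the cruise speeds are illustrative assumptions.
DISTANCE_LY = 4.37  # approximate distance in light-years

for fraction_of_c in (0.01, 0.05, 0.10, 0.20):
    years = DISTANCE_LY / fraction_of_c  # light-years divided by speed (in units of c) gives years
    print(f"At {fraction_of_c:.0%} of light speed: about {years:.0f} years one way")

# Prints roughly 437, 87, 44, and 22 years -- long voyages, but no "bending" of physical laws.
```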

p. 181 – “As far as I am aware, there is no country in the world today where slavery is still practiced where the justification of it is not derived from the Koran.”  Among the countries ranked highest in modern-day slavery are several Islamic countries, but also China, Russia, Thailand, and Haiti.  It would be odd if these countries cited the Koran as a justification for slavery.

p. 192 – Pointing to the Rwandan genocide, Hitchens argues, “At a minimum, this makes it impossible to argue that religion causes people to behave in a more kindly or civilized manner.  The worse the offender, the more devout he turns out to be.”  Among the worst practitioners of genocide in the past hundred years were atheists, including Stalin, Mao Tse-tung, and Pol Pot.  It is not clear whether Hitler was an atheist or a deist, but he was certainly not “devout.”  Finally, the majority of social science studies have shown that those with orthodox religious beliefs are less inclined to commit crime.

p. 232 – Hitchens attempts to argue that atheist totalitarian regimes are actually religious in nature: “[T]he object of perfecting the species — which is the very root and source of the totalitarian impulse — is in essence a religious one.”  Actually, a major point of most religions is that perfection on earth is not possible, that perfection is only found in an other-worldly place called heaven or nirvana.  The communist critique of religion is precisely that it makes people satisfied with their lot on earth, waiting and longing for a world that never comes.

p. 279 – Hitchens makes a reference to “Iran’s progress in thermonuclear fission.”  The correct terminology is “nuclear fission,” not “thermonuclear fission.”  “Thermonuclear” refers to the use of very high temperatures to cause the fusion of atomic nuclei, not fission.  It is possible to use a thermonuclear process involving hydrogen and boron to cause the fission of boron atoms, but this is not what Iran is currently doing.

p. 283 – “The study of literature and poetry, both for its own sake and for the eternal ethical questions with which it deals, can now easily depose the scrutiny of sacred texts that have been found to be corrupt and confected.”  After dismissing religious stories as fictional, Hitchens argues that we can obtain ethical guidance from . . . the fictions of literature and poetry.  Never mind that religious texts are also powerful sources of literature and poetry, that Jesus used parables to illustrate ethics, and that Church Fathers often interpreted the myths of the Bible allegorically.  Only secular sources of fiction, in Hitchens’ view, can be used as a guide to ethics.  Why is not clear.

Well, that’s it.  Reading Hitchens’ book was occasionally enjoyable, but more often exhausting.  There are only so many blatant falsehoods a person can handle without wanting to flee.

 

Two Types of Religion

Debates about religion in the West tend to center around the three monotheistic religions — Judaism, Christianity, and Islam.  However, it is important to note that these three religions are not necessarily typical or representative of religion in general.

In fact, there are many different types of religion, but for purposes of simplicity I would like to divide the religions of the world into two types: revealed religion and philosophical religion.  These two categories are not mutually exclusive, and many religions overlap both categories, but I think it is a useful conceptual divide.

“Revealed religion” has been defined as a “religion based on the revelation by God to man of ideas that he would not have arrived at by his natural reason alone.”  The three monotheistic religions all belong in this category, though there are philosophers and elements of philosophy in these religions as well.  Most debates about religion and science, or religion and reason, assume that all religions are revealed religions.  However, there is another type of religion: philosophical religion.

Philosophical religion can be defined as a set of religious beliefs that are arrived at primarily through reason and dialogue among philosophers.  The founders of philosophical religions put forth ideas on the basis that these ideas are human creations accessible to all and subject to discussion and debate like any other idea.  These religions are found mainly in the East, and include Confucianism, Taoism, and Hinduism.  However, there are also philosophical religions in the West, such as Platonism or Stoicism, and there have been numerous philosophers who have constructed philosophical interpretations of the three monotheistic religions as well.

There are a number of crucial distinguishing characteristics that separate revealed religion from philosophical religion.

Revealed religion originates in a single prophet, who claims to have direct communication with God.  Even when historical research indicates multiple people playing a role in founding a revealed religion, as well as the borrowing of concepts from other religions, the tradition and practice of revealed religion generally insists upon the unique role of a prophet who is usually regarded as infallible or close to infallible — Moses, Jesus, or Muhammad.  Revealed religion also insists on the existence of God, often defined as a personal, supreme being who has the qualities of omniscience and omnipotence.  (It may seem obvious to many that all religions are about God, but that is not the case, as will be discussed below.)

Faith is central to revealed religion.  Rational argument and evidence may be used to convince others of the merits of a revealed religion, but ultimately there are too many fundamental beliefs in a revealed religion that are either non-demonstrable or contradictory to evidence from science, history, and archeology.  Faith may be used positively, as an aid to making a decision in the absence of clear evidence, so that one does not sustain loss from despair and a paralysis of will; however, faith may also be used negatively, to deny or ignore findings from other fields of knowledge.

The problems with revealed religion are widely known: these religions are prone to a high degree of superstition and many followers embrace anti-scientific attitudes when the conclusions of science refute or contradict the beliefs of revealed religion.  (This is a tendency, not a rule — for example, many believers in revealed religion do not regard a literal interpretation of the Garden of Eden story as central to their beliefs, and they fully accept the theory of evolution.)  Worse, revealed religions appear to be prone to intolerance, oppression of non-believers and heretics, and bloody religious wars.  It seems most likely that this intolerance is the result of a belief system that sees a single prophet as having a unique, infallible relationship to God, with all other religions being in error because they lack this relationship.

Philosophical religion, by contrast, emerges from a philosopher or philosophers engaging in dialogue.  In the West, this role was played by philosophers in ancient Greece and Rome, before their views were eclipsed by the rise of the revealed religion of Christianity.  In the East, philosophers were much more successful in establishing great religions.  In China, Confucius established a system of beliefs about morals and righteous behavior that influenced an entire empire, while Lao Tzu proposed that a mysterious power known as the “Tao” was the source and driving force behind everything.  In India, Hinduism originated as a diverse collection of beliefs by various philosophers, with some unifying themes, but no single creed.

As might be expected, philosophical religions have tended to be more tolerant and cosmopolitan than revealed religions.  Neither Greek nor Roman philosophers were inclined to kill each other over the finer points of Plato’s conception of God or the various schools of Stoicism, because no one ever claimed to have an infallible relationship with an omnipotent being.  In China, Confucianism, Taoism, and Buddhism are not regarded as incompatible, and many Chinese subscribe to elements of two or all three belief systems.  It is rare to ever see a religious war between adherents of philosophical religions.  And although many people automatically equate religion with faith, there is usually little or no role for faith in philosophical religions.

The role of God in philosophical religions is very different from the role of God in revealed religions.  Most philosophers, in east and west, defined God in impersonal terms, or proposed a God that was not omnipotent, or regarded a Creator God as unimportant to their belief system.  For example, Plato proposed that a secondary God known as a “demiurge” was responsible for creating the universe; the demiurge was not omnipotent, and was forced to create a less-than-perfect universe out of the imperfect materials he was given.  The Stoics did not subscribe to a personal God and instead proposed that a divine fire pervaded the universe, acting on matter to bring all things into accordance with reason.  Confucius, while not explicitly rejecting the possibility of God, did not discuss God in any detail, and had no role for divine powers in his teachings.  The Tao of Lao Tzu is regarded as a mysterious power underlying all things, but it is certainly not a personal being.  Finally, the concept of a Creator God is not central to Hinduism; in fact one of the six orthodox schools of Hinduism is explicitly atheistic, and has been for over two thousand years.

There are many virtues to philosophical religion.  While philosophical religion is not immune to the problem of incorrect conceptions and superstition, it does not resist reason and science, nor does it attempt to stamp out challenges to its claims to the same extent as revealed religions.  Philosophical religion is largely tolerant and reasonable.

However, there is also something arid and unsatisfying about many philosophical religions.  The claims of philosophical religion are usually modest, and philosophical religion has cool reason on its side.  But philosophical religion often does not have the emotional and imaginative content of revealed religion, and in these ways it is lacking. The emotional swings and imaginative leaps of revealed religion can be dangerous, but emotion and imagination are also essential to full knowledge and understanding (see here and here).  One cannot properly assign values to things and develop the right course of action without the emotions of love, joy, fear, anger, and sadness.  Without imagination, it is not possible to envision better ways of living.  When confronted with mystery, a leap of faith may be justified, or even required.

Abstractly, I have a great appreciation for philosophical religion, but in practice, I prefer Christianity.  I have the greatest admiration for the love of Christ, and I believe in Christian love as a guide for living.  At the same time, my Christianity is unorthodox and leavened with a generous amount of philosophy.  I question various doctrinal points of Christianity, I believe in evolution, and I don’t believe in miracles that violate the physical laws that have been discovered by science.  I think it would do the world good if revealed religions and philosophical religions recognized and borrowed each other’s virtues.

A Living, Intelligent Universe

A fascinating article in the December 23, 2013 issue of The New Yorker discusses the latest research on the behavior of plants, and the disputes among scientists as to whether this indicates that plants have intelligence.  In brief, the article summarizes research indicating the following:  Plants can sense light, moisture, gravity, and pressure, and they use these inputs to determine an optimal growth path.  In addition, plants can sense a variety of chemicals and microbes in soil, as well as chemical signals from other plants.  One scientist estimates that an average plant has three thousand chemicals in its vocabulary.  When plants are attacked or injured, whether by insects, animals, or humans, they produce an anesthetic.  In fact, many of the chemicals we use today, from caffeine to aspirin and other drugs, were originally developed by plants as defense mechanisms against attack.  Plants under attack will also emit a chemical distress signal to other plants, which prompts the other plants to initiate their own defense mechanisms (for example, plants will produce toxins that make them less tasty or digestible to animals, or they will emit signals to predator insects who will attack the plant-eating insects).  Plants compete with other plants for resources, but they also cooperate with each other to an amazing degree, sharing resources with younger or weaker plants.  In fact, trees employ networks of underground fungi to exchange resources as well as information.  Scientists have jokingly referred to this exchange system as the “wood-wide web.”

Most of these observations regarding plant behavior are not disputed among scientists.  What is disputed is the issue of whether or not this behavior constitutes intelligence.  There is a consensus that plants do not have a central organ that performs the functions of a brain, and it is agreed that plants do not have the abstract reasoning skills that a human being would have.  However, a number of scientists argue that such a definition of intelligence is too restrictive.  They propose that plants do have intelligence, defined as “an intrinsic ability to process information from both abiotic and biotic stimuli that allows optimal decisions about future activities in a given environment.”  Or more simply, says one scientist, “Intelligence is the ability to solve problems.”  In fact, this same scientist is currently working with a computer scientist to design a plant-based computer, “modeled on the distributed computing performed by thousands of roots processing a vast number of environmental variables.”  Such an attempt would build upon previous efforts to construct computers based on the information processing capabilities of slime molds and DNA molecules.
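
As a toy sketch of what “distributed computing” without a brain might look like (my own illustration, not the researchers’ actual model or data), imagine many root tips, each scoring only its local patch of soil, with the “decision” emerging from the aggregate:

```python
# Toy illustration of brainless, distributed decision-making, loosely inspired by
# the "thousands of roots processing environmental variables" analogy.
# The sensor readings and weights below are invented for illustration only.
import random

def root_tip_score(moisture, nutrients, toxins):
    """Each root tip evaluates its own patch of soil using only local information."""
    return 0.5 * moisture + 0.4 * nutrients - 0.8 * toxins

random.seed(1)
directions = ["north", "south", "east", "west"]
votes = {d: 0.0 for d in directions}

# 1,000 root tips, each sensing one random patch in one of four directions.
for _ in range(1000):
    d = random.choice(directions)
    patch = (random.random(), random.random(), random.random())
    votes[d] += root_tip_score(*patch)

# No central controller: growth simply follows the direction with the best aggregate score.
best = max(votes, key=votes.get)
print(f"Grow toward the {best}: {votes}")
```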

What is fascinating about this new research is that it continues a trend in human knowledge in which our initial criteria for intelligent life have had to be gradually expanded to include more and more species formerly regarded as mindless.  This raises the issue: is there in fact a clear dividing line between mindless matter and intelligent life, or is there simply a continuum, with human beings having the most advanced intelligence, animal and plant life having a more primitive intelligence, and the fundamental components of matter (molecules, atoms, physical forces, etc.) having a very primitive form of embedded intelligence?  In this view, the components of matter do not have consciousness in the same way that humans or animals do, but they do “know” how to do certain things.  In the case of the components of matter, they may “know” only how to do one or two things, such as form combinations with other components of matter.  But even this primitive knowledge is a form of knowledge nonetheless.

Viewing intelligence as something inherent in all things is part of the theory of hylozoism, which posits that the entire universe is in some sense alive.  Hylozoism goes back to the ancient Greek philosophers and has been proposed at various times by different thinkers since then.  The Renaissance friar and scientist Giordano Bruno was a proponent of hylozoism, among other heresies, and was burnt at the stake by the Catholic Church.

Is viewing the universe as alive and intelligent outrageous?  Consider the definition of “intelligence” put forth by the scientists studying plant life: the ability to “process information” or “solve problems.”  This definition actually encompasses many or most of the functions of the physical laws of the universe, according to many physicists.  In their view, the universe can be conceptualized as an information-processing mechanism, a vast computer.  In fact, the 19th century English mathematician Charles Babbage, who designed the first programmable mechanical computing engines and is widely known as the “father of the computer,” believed that the universe could indeed be conceptualized as an immense computer, with the laws of the universe serving as the program.

This view of universal intelligence is not the same as the traditional view of an omniscient and omnipotent being standing above the universe and directing all of its affairs — which is why Giordano Bruno was burned at the stake.  But the view of a universe with an embedded intelligence existing in all things is an intriguing alternative to the view that sees a sharp distinction between intelligent beings — divine or human — and allegedly mindless matter.  Rather than viewing the universe as something mindless that is acted upon by an external intelligence, perhaps it is better to conceive of the universe as having an inherent intelligence that grows more complex over time.

The Role of Emotions in Knowledge

In a previous post, I discussed the idea of objectivity as a method of avoiding subjective error.  When people say that an issue needs to be looked at objectively, or that science is the field of knowledge best known for its objectivity, they are arguing for the need to overcome personal biases and prejudices, and to know things as they really are in themselves, independent of the human mind and perceptions.  However, I argued that truth needs to be understood as a fruitful or proper relationship between subjects and objects, and that it is impossible to know the truth by breaking this relationship.

One way of illustrating the relationship between subjects and objects is by examining the role of human emotions in knowledge.  Emotions are considered subjective, and one might argue that although emotions play a role in the form of knowledge known as the humanities (art, literature, religion), emotions are either unnecessary or an impediment to knowledge in the sciences.  However, a number of studies have demonstrated that feeling plays an important role in cognition, and that the loss of emotions in human beings leads to poor decision-making and an inability to cope effectively with the real world.  Emotionless human beings would in fact make poor scientists.

Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competency after damage took place to the emotional center of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  It might be expected that the loss of emotion would lead to failures in social relationships.  So why were these people unable to even effectively advance their self-interest?  According to Damasio, without emotions, these persons were unable to value, and without value, decision-making became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted, “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51)

It is true that emotional swings — anger, depression, anxiety, even excessive joy — can lead to very bad decisions.  But the solution to this problem, according to Damasio, is to achieve the right emotional disposition, not to erase the emotions altogether.  One has to find the right balance or harmony of emotions.

Damasio describes one patient who, after suffering damage to the emotional center of his brain, gained one significant advantage: while driving to his appointment on icy roads, he was able to remain calm and drive safely, while other drivers had a tendency to panic when they skidded, leading to accidents.  However, Damasio notes the downside:

I was discussing with the same patient when his next visit to the laboratory should take place.  I suggested two alternative dates, both in the coming month and just a few days apart from each other.  The patient pulled out his appointment book and began consulting the calendar.  The behavior that ensued, which was witnessed by several investigators, was remarkable.  For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates . . . Just as calmly as he had driven over the ice, and recounted that episode, he was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.  It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates.  His response was equally calm and prompt.  He simply said, ‘That’s fine.’ (pp. 193-94)

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion, and therefore hypothetically capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools for effective decision-making in everyday life are needed for the scientific enterprise as well.

As the French mathematician and scientist Henri Poincaré noted, every time we look at the world, we encounter an immense mass of unorganized facts.  We don’t have the time to thoroughly examine all those facts, and we don’t have the time to pursue experiments on all the hypotheses that may pop into our minds.  We have to use our intuition and best judgment to select the most important facts and develop the best hypotheses (Foundations of Science, pp. 127-30, 390-91).  An emotionless scientist would not only be unable to sustain the social interaction that science requires, he or she would be unable to develop a research plan, manage his or her time, or stick to that plan.  An ability to perceive value is fundamental to the scientific enterprise, and emotions are needed to properly perceive and act on the right values.

The Role of Imagination in Science, Part 3

In previous posts (here and here), I argued that mathematics was a product of the human imagination, and that the test of mathematical creations was not how real they were but how useful or valuable they were.

Recently, Russian mathematician Edward Frenkel, in an interview in the Economist magazine, argued the contrary case.  According to Frenkel,

[M]athematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness.  We mathematicians discover them and are able to connect to this hidden reality through our consciousness.  If Leo Tolstoy had not lived we would never have known Anna Karenina.  There is no reason to believe that another author would have written that same novel.  However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem.

Dr. Frenkel goes on to note that mathematical concepts don’t always match physical reality — Euclidean geometry represents an idealized three-dimensional flat space, whereas our actual universe has curved space.  Nevertheless, mathematical concepts must have an objective reality because “these concepts transcend any specific individual.”

One problem with this argument is the implicit assumption that the human imagination is wholly individualistic and arbitrary, and that if multiple people come up with the same idea, this must demonstrate that the idea exists objectively outside the human mind.  I don’t think this assumption is valid.  It’s perfectly possible for the same idea to be invented by multiple people independently.  Surely if Thomas Edison never lived, someone else would have invented the light bulb.  Does that mean that the light bulb is not a true creation of the imagination, that it was not invented but always existed “objectively” before Edison came along and “discovered” it?  I don’t think so.  Likewise with modern modes of ground transportation, air transportation, manufacturing technology, etc.  They’re all apt to be imagined and invented by multiple people working independently; it’s just that patent law recognizes only the first person to file.

It’s true that in other fields of human knowledge, such as literature, one is more likely to find creations that are truly unique.  Yes, Anna Karenina is not likely to be written by someone else in the absence of Tolstoy.  However, even in literature, there are themes that are universal; character names and specific plot developments may vary, but many stories are variations on the same theme.  Consider the following story: two characters from different social groups meet and fall in love; the two social groups are antagonistic toward each other and would disapprove of the love; the two lovers meet secretly, but are eventually discovered; one or both lovers die tragically.  Is this not the basic plot of multiple stories, plays, operas, and musicals going back two thousand years?

Dr. Frenkel does admit that not all mathematical concepts correspond to physical reality.  But if there is not a correspondence to something in physical reality, what does it mean to say that a mathematical concept exists objectively?  How do we prove something exists objectively if it is not in physical reality?

If one looks at the history of mathematics, there is an intriguing pattern in which the earliest mathematical symbols do indeed seem to point to or correspond to objects in physical reality; but as time went on and mathematics advanced, mathematical concepts became more and more creative and distant from physical reality.  These later mathematical concepts were controversial among mathematicians at first, but later became widely adopted, not because someone proved they existed, but because the concepts seemed to be useful in solving problems that could not be solved any other way.

The earliest mathematical concepts were the “natural numbers,” the numbers we use for counting (1, 2, 3 . . .).  Simple operations were derived from these natural numbers.  If I have two apples and add three apples, I end up with five apples.  However, the number zero was initially controversial — how can nothing be represented by something?  The ancient Greeks and Romans, for all of their impressive accomplishments, did not use zero, and the number zero was not adopted in Europe until the Middle Ages.

Negative numbers were also controversial at first.  How can one have “negative two apples” or a negative quantity of anything?  However, it became clear that negative numbers were indeed useful conceptually.  If I have zero apples and borrow two apples from a neighbor, according to my mental accounting book, I do indeed have “negative two apples,” because I owe two apples to my neighbor.  It is an accounting fiction, but it is a useful and valuable fiction.  Negative numbers were invented in ancient China and India, but were rejected by Western mathematicians and were not widely accepted in the West until the eighteenth century.

The set of numbers known as “imaginary numbers” was even more controversial, since it involved a quantity which, when squared, results in a negative number.  Since no real number behaves this way, imaginary numbers were initially derided.  However, imaginary numbers proved to be such a useful conceptual tool in solving certain problems, they gradually became accepted.  Imaginary numbers have been used to solve problems in electric current, quantum physics, and envisioning rotations in three dimensions.
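
For what it’s worth, a working programmer today treats these once-derided numbers as perfectly ordinary tools; here is a minimal sketch (my own example, standard textbook material) of an imaginary number squaring to a negative and performing a rotation:

```python
# Imaginary numbers as a practical, everyday tool: Python's built-in complex type.
import cmath

i = 1j
print(i * i)  # (-1+0j): a quantity whose square is negative

# Multiplying by e^(i*theta) rotates a point in the plane -- one reason these
# "fictions" show up in alternating-current analysis and quantum mechanics.
point = 1 + 0j
rotated = point * cmath.exp(1j * cmath.pi / 2)  # rotate 90 degrees counterclockwise
print(round(rotated.real, 10), round(rotated.imag, 10))  # 0.0 1.0
```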

Professor Stephen Hawking has used imaginary numbers in his own work on understanding the origins of the universe, employing “imaginary time” in order to explore what it might be like for the universe to be finite in time and yet have no real boundary or “beginning.”  The potential value of such a theory in explaining the origins of the universe leads Professor Hawking to state the following:

This might suggest that the so-called imaginary time is really the real time, and that what we call real time is just a figment of our imaginations.  In real time, the universe has a beginning and an end at singularities that form a boundary to space-time and at which the laws of science break down.  But in imaginary time, there are no singularities or boundaries.  So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like.  But according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds.  So it is meaningless to ask: which is real, “real” or “imaginary” time?  It is simply a matter of which is the more useful description.  (A Brief History of Time, p. 144.)

If you have trouble understanding this passage, you are not alone.  I have a hard enough time understanding imaginary numbers, let alone imaginary time.  The main point that I wish to underline is that even the best theoretical physicists don’t bother trying to prove that their conceptual tools are objectively real; the only test of a conceptual tool is if it is useful.

As a final example, let us consider one of the most intriguing of imaginary mathematical objects, the “hypercube.”  A hypercube is a cube that extends into additional dimensions, beyond the three spatial dimensions of an ordinary cube.  (Time is usually referred to as the “fourth dimension,” but in this case we are dealing strictly with spatial dimensions.)  A hypercube can be imagined in four dimensions, five dimensions, eight dimensions, twelve dimensions — in fact, there is no limit to the number of dimensions a hypercube can have, though the hypercube gets increasingly complex and eventually impossible to visualize as the number of dimensions increases.
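
Even when a hypercube can no longer be visualized, its combinatorics can still be written down; here is a small sketch using the standard formulas (my own example, not drawn from the sources discussed in this post):

```python
# An n-dimensional hypercube has 2**n vertices and n * 2**(n-1) edges,
# and its vertices can be listed as all n-tuples of 0s and 1s.
from itertools import product

def hypercube_counts(n):
    return 2 ** n, n * 2 ** (n - 1)

for n in (3, 4, 5, 12):
    vertices, edges = hypercube_counts(n)
    print(f"{n}-dimensional hypercube: {vertices} vertices, {edges} edges")

# For small n the corners can even be enumerated explicitly:
print(list(product((0, 1), repeat=4))[:4])  # first few vertices of a 4-D hypercube
```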

Does a hypercube correspond to anything in physical reality?  Probably not.  While there are theories in physics that posit five, eight, ten, or even twenty-six spatial dimensions, these theories also posit that the additional spatial dimensions beyond our third dimension are curved up in very, very small spaces.  How small?  A million million million million millionth of an inch, according to Stephen Hawking (A Brief History of Time, p. 179).  So as a practical matter, hypercubes could exist only on the most minute scale.  And that’s probably a good thing, as Stephen Hawking points out, because in a universe with four fully-sized spatial dimensions, gravitational forces would become so sensitive to minor disturbances that planetary systems, stars, and even atoms would fly apart or collapse (pp. 180-81).

Dr. Frenkel would admit that hypercubes may not correspond to anything in physical reality.  So how do hypercubes exist?  Note that there is no limit to how many dimensions a hypercube can have.  Does it make sense to say that the hypercube consisting of exactly 32,458 dimensions exists objectively out there somewhere, waiting for someone to discover it?   Or does it make more sense to argue that the hypercube is an invention of the human imagination, and can have as many dimensions as can be imagined?  I’m inclined to the latter view.

Many scientists insist that mathematical objects must exist out there somewhere because they’ve been taught that a good scientist must be objective and dedicate him or herself to the discovery of things that exist independently of the human mind.  But there are too many mathematical ideas that are clearly products of the human mind, and they’re too useful to abandon merely because they are products of the mind.

Omnipotence and Human Freedom

Prayson Daniel writes about Christian author C. S. Lewis’s attempt to deal with the problem of evil here and here.  Lewis, who suffered tragic loss at an early age, became an atheist when young, but later converted to Christianity.  Lewis directly addressed the challenge of the atheists’ argument — why would an omnipotent and benevolent God allow evil to exist? — in his books The Problem of Pain and Mere Christianity.

Central to Lewis’s argument is the notion that the freedom to do good or evil is essential to being human.  If human beings were always compelled to do good, they would not be free, and thus would be unable to attain genuine happiness.

One way to illustrate the necessity of freedom is to imagine a world in which human beings were unable to commit evil — no violence, no stealing, no lying, no cheating, no betrayal.  At first, such a world might appear to be a paradise.  But the price would be this: essentially we would all be nothing but robots.  Without the ability to commit evil, doing good would have no meaning.  We would do good simply because we were programmed or compelled to do nothing but good.  There would be no choices because there would be no alternatives.  Love and altruism would have no meaning because it wouldn’t be freely chosen.

Let us imagine a slightly different world, a world in which freedom is allowed, but God always intervenes to reward the good and punish the guilty.  No good people ever suffer.  Earthquakes, fires, disease, and other natural disasters injure and kill only those who are guilty of evil.  Those who do good are rewarded with good health, riches, and happiness.  This world seems only slightly better than the world in which we are robots.  In this second world, we are mere zoo animals or pets.  We would be trained by our master to expect treats when we behave and punishment when we misbehave.  Again, doing good would have no meaning in this world — we would simply be advancing our self-interest, under constant, inescapable surveillance and threat of punishment.  In some ways, life in this world would be almost as regimented and monotonous as in the world in which we are compelled to do good.

For these reasons, I find the “free will” argument for the existence of evil largely persuasive when it comes to explaining the existence of evil committed by human beings.  I can even see God as having so much respect for our freedom that he would stand aside even in the face of an enormous crime such as genocide.

However, I think that the free will argument is less persuasive when it comes to accounting for evils committed against human beings by natural forces — earthquakes, fires, floods, disease, etc.  Natural forces don’t have free will in the same sense that human beings do, so why doesn’t God intervene when natural forces threaten life?  Granted, it would be asking too much to expect that natural disasters happen only to the guilty.  But the evils resulting from natural forces seem to be too frequent, too immense, and too random to be attributed to the necessity of freedom.  Why does freedom require the occasional suffering and death of even small children?  It’s hard to believe that small children have even had enough time to live in order to exercise their free will in a meaningful way.

Overall, the scale of divine indifference in cases of natural disaster is too great for me to think that it is part of a larger gift of free will.  For this reason, I am inclined to think that there are limits on God’s power to make a perfect world, even if the freedom accorded to human beings is indeed a gift of God.

Miracles

The Oxford English Dictionary defines a “miracle” as “a marvelous event occurring within human experience, which cannot have been brought about by any human power or by the operation of any natural agency, and must therefore be ascribed to the special intervention of the Deity or some supernatural being.”  (OED, 1989)  This meaning reflects how the word “miracle” has been commonly used in the English language for hundreds of years.

Since a miracle, by definition, involves a suspension of physical laws in nature by some supernatural entity, the question of whether miracles take place, or have ever taken place, is an important one.  Most adherents of religion — any religion — are inclined to believe in miracles; skeptics argue that there is no evidence to support the existence of miracles.

I believe skeptics are correct that the evidence for a supernatural agency occasionally suspending the normal processes and laws of nature is very weak or nonexistent.  Scientists have been studying nature for hundreds of years; when an observed event does not appear to follow physical laws, it usually turns out that the law is imperfectly understood and needs to be modified, or there is some other physical law that needs to be taken into account.  Scientists have not found evidence of a supernatural being behind observational anomalies.  This is not to say that everything in the universe is deterministic and can be reduced to physical laws.  Most scientists agree that there is room for indeterminacy in the universe, with elements of freedom and chance.  But this indeterminacy does not seem to correspond to what people have claimed as miracles.

However, I would like to make the case that the way we think about miracles is all wrong, that our current conception of what counts as a miracle is based on a mistaken prejudice in favor of events that we are unaccustomed to.

According to the Oxford English Dictionary, the word “miracle” is derived from the Latin word “miraculum,” which is an “object of wonder.” (OED 1989)  A Latin dictionary similarly defines “miraculum” as “a wonderful, strange, or marvelous thing, a wonder, marvel, miracle.” (Charlton T. Lewis, A Latin Dictionary, 1958)  There is nothing in the original Latin conception of miraculum that requires a belief in the suspension of physical laws.  Miraculum is simply about wonder.

Wonder as an activity is an intellectual exercise, but it is also an emotional disposition.  We wonder about the improbable nature of our existence, we wonder about the vastness of the universe, we wonder about the enormous complexity and diversity of life.  From wonder often comes other emotional dispositions: astonishment, puzzlement, joy, and gratitude.

The problem is that in our humdrum, everyday lives, it is easy to lose wonder.  We become accustomed to existence through repeated exposure to the same events happening over and over, and we no longer wonder.  The satirical newspaper The Onion expresses this disposition well: “Miracle Of Birth Occurs For 83 Billionth Time,” reads one headline.

Is it really the case, though, that a wondrous event ceases to be wondrous because it occurs frequently, regularly, and appears to be guided by causal laws?  The birth of a human being begins with blueprints provided by an egg cell and a sperm cell; over the course of nine months, over 100,000,000,000,000,000,000,000,000 atoms of oxygen, carbon, hydrogen, nitrogen and other elements gradually come together in the right place at the right time to form the extremely intricate arrangement known as a human being.  If anything is a miraculum, or wonder, it is this event.  But because it happens so often, we stop noticing.  Stories about crying statues, or people seeing the heart of Jesus in a communion wafer, or the face of Jesus in a sock get our attention and are hailed as miracles because these alleged events are unusual.  But if you think about it, these so-called miracles are pretty insignificant in comparison to human birth.  And if crying statues were a frequent event, people would gradually become accustomed to it; after a while, they would stop caring, and start looking around for something new to wonder about.
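
That atom count is easy to sanity-check with rough arithmetic (the body mass and average atomic weight below are my own ballpark assumptions):

```python
# Rough sanity check of the "over 10^26 atoms" figure for a newborn.
# Assumes a ~3.5 kg newborn and an average atomic mass of ~7 grams per mole
# (the body is mostly hydrogen, oxygen, and carbon by atom count).
AVOGADRO = 6.022e23      # atoms per mole
mass_grams = 3500        # assumed newborn mass
avg_atomic_mass = 7.0    # assumed grams per mole

atoms = mass_grams / avg_atomic_mass * AVOGADRO
print(f"Approximately {atoms:.1e} atoms")  # about 3.0e+26
```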

What a paradox.  We are surrounded by genuine miracles every day, but we don’t notice them.  So we grasp at the most trivial coincidences and hoaxes in order to restore our sense of wonder, when what we should be doing is not taking so many wonders for granted.

Misunderstanding Manicheanism

A lot of religions and philosophies are misunderstood to varying degrees, but if I had to pick one religion or philosophy as being the most misunderstood it would be Manicheanism.  First propounded by the prophet Mani (or Manes) in Persia in the third century C.E., this religion viewed the universe as consisting of a battle between the forces of light and the forces of darkness.  God was good, but was not all-powerful, which is why there was evil in the world.  Human beings and other material things were a mixture of the forces of light and forces of darkness; the task of human beings was to separate the light from the dark by shunning evil and doing good deeds.

In modern day America, the term “Manichean” is used disparagingly, as a way of attacking those who see political or social conflict as being wars of good vs. evil.  A Manichean view, it is argued or implied, depicts the self as purely good, opponents as demonic, and compromise as virtually impossible.  A recent example of this is a column by George Will about the negotiations over Iran’s nuclear program.  Will describes Iran as being “frightening in its motives (measured by its rhetoric) and barbaric in its behavior,” and quotes author Kenneth Pollack, who notes that Manicheanism was a Persian (Iranian) religion that “conceived of the world as being divided into good and evil.”  Of course, Manicheanism no longer has a significant presence in modern-day Iran, but you get the point — those Persians have always been simple-minded fanatics.

Let’s correct this major misconception right now: Manicheanism does NOT identify any particular tribe, group, religion, or nation as being purely good or purely evil.  Manicheanism sees good and evil as cosmological forces that are mixed in varying degrees in the material things we see all around us.  Humanity, in this view, consists of forces of light (good) mixed with darkness; the task of humanity is to seek and release this inner light, not to label other human beings as evil and do battle with them.

If anything, Manicheanism was one of the most cosmopolitan and tolerant religions in history.  Manicheanism aimed to be a universal religion and incorporated elements of Christianity, Zoroastrianism, Buddhism, and Hinduism. The most dedicated adherents of Manicheanism were required to adopt a life of nonviolence, including vegetarianism.  For their trouble, Manicheans were persecuted and killed by the Christian, Buddhist, and Muslim societies in which they lived.

The Manichean view of human beings as being a mixture of good and evil is really a mainstream view shared by virtually all religions.  Alexander Solzhenitsyn has described this insight well:

It was granted to me to carry away from my prison years on my bent back, which nearly broke beneath its load, this essential experience: how a human being becomes evil and how good.  In the intoxication of youthful successes I had felt myself to be infallible, and I was therefore cruel.  In the surfeit of power I was a murderer and an oppressor.  In my most evil moments I was convinced that I was doing good, and I was well supplied with systematic arguments.  It was only when I lay there on rotting prison straw that I sensed within myself the first stirrings of good.  Gradually it was disclosed to me that the line separating good and evil passes not through states, nor between classes, nor between political parties either, but right through every human heart, and through all human hearts.  This line shifts.  Inside us, it oscillates with the years.  Even within hearts overwhelmed by evil, one small bridgehead of good is retained; and even in the best of all hearts, there remains a small corner of evil.

Since then I have come to understand the truth of all the religions of the world: they struggle with the evil inside a human being (inside every human being).  It is impossible to expel evil from the world in its entirety, but it is possible to constrict it within each person.

This is what Manicheanism teaches: the battle between good and evil lies within all humans, not between purely good humans and purely evil humans.

Objectivity is Not Scientific

It is a common perception that objectivity is a virtue in the pursuit of knowledge, that we need to know things as they really are, independent of our mental conceptions and interpretations.  It is also a common perception that science is the form of knowledge that is the most objective, and that is why scientific knowledge makes the most progress.

Yet the principle of objectivity immediately runs into problems in the most famous scientific theory, Einstein’s theory of relativity.  According to relativity theory, there is no objective way to measure objects in space and time — these measures are always relative to observers, depending on the velocity at which the objects and observers are travelling, and observers often end up with different measures for the same object as a result.  For example, an object travelling at very high speed will appear shortened along its direction of motion to an outside observer, a phenomenon known as length contraction.  In addition, time will move more slowly for an observer travelling at high speed than for an observer travelling at low speed.  This phenomenon is illustrated in the “twin paradox” — given a pair of twins, if one sets off in a high speed rocket while the other stays on earth, the twin on the rocket will have aged more slowly than the twin on earth when the two are reunited.  Finally, the sequence of two spatially-separated events, say Event A and Event B, can differ according to the position and velocity of the observer.  Some observers may see Event A occurring before Event B, others may see Event B occurring before Event A, and others will see the two events as simultaneous.  There is no objectively true sequence of events.
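
Both effects fall out of the Lorentz factor; here is a minimal numerical sketch (the speed, ship length, and trip duration are illustrative values of my own, and the twin-paradox line uses the usual simplified textbook treatment):

```python
# Length contraction and time dilation via the Lorentz factor:
# gamma = 1 / sqrt(1 - v^2 / c^2)
import math

def gamma(v_fraction_of_c):
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

v = 0.8          # travelling at 80% of light speed (illustrative)
g = gamma(v)     # about 1.667

rest_length = 100.0           # metres, as measured aboard the ship
contracted = rest_length / g  # length measured by an outside observer
print(f"gamma = {g:.3f}, contracted length = {contracted:.1f} m")  # 60.0 m

earth_years = 10.0                 # round trip as measured on earth
traveller_years = earth_years / g  # simplified twin-paradox bookkeeping
print(f"Earth twin ages {earth_years} years; travelling twin ages {traveller_years:.1f}")
```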

The theory of relativity does not say that everything is relative.  The speed of light, for example, is the same for all observers, whether they are moving at a fast speed toward a beam of light or away from a beam of light.  In fact, it was the absolute nature of light speed for all moving observers that led Einstein to conclude that time itself must be different for different observers.  In addition, for any two events that are causally-connected, the events must take place in the same sequence for all observers.  In other words, if Event A causes Event B, Event A must precede Event B for all observers.  So relativity theory sees some phenomena as different for different observers and others as the same for different observers.

Finally, the meaning of relativity in science is not that one person’s opinion is just as valid as anyone else’s.  Observers within the same frame of reference (say, multiple observers travelling together in the same vehicle) should agree on measurements of length and time for an outside object, even if observers in other reference frames obtain different results.  If observers within the same vehicle do not agree, then something has gone wrong: perhaps someone is misperceiving or misinterpreting, or an instrument is faulty.

Nevertheless, if one accepts the theory of relativity, and this theory has been accepted by scientists for many decades now, one has to accept that there is no single objective measure of objects in space and time; such measurements are observer-dependent.  So why do many cling to the notion of objectivity as a principle of knowledge?

Historically, the goal of objectivity was proposed as a way to solve the problem of subjective error.  Individual subjects have imperfect perceptions and interpretations; what they see and claim is fallible.  The principle of objectivity tries to overcome this problem by proposing that we evaluate objects as they are in themselves, independent of any human mind.  The problem with this principle is that we cannot step outside of our bodies and minds in order to evaluate an object.

So how do we overcome the problem of subjective error?  The solution is not to abandon mind but to supplement it: communicating with other minds, checking for individual error by seeing whether others get different results, engaging in dialogue, and attempting to come to a consensus.  Observations and experiments are repeated many times by many different people before conclusions are established.  In this view, knowledge advances by using the combined power of thousands and thousands of minds, past and present.  That is the only way to correct a faulty relationship between subject and object and to make that relationship better.

In the end, all knowledge, including scientific knowledge, is essentially and unalterably about the relationship between subjects and objects — you cannot find true knowledge by splitting objects from subjects any more than you can split H2O into its individual atoms of hydrogen and oxygen and expect to find water in the component parts.

How Powerful is God?

In a previous post, we discussed the non-omnipotent God of process theology as a possible explanation for the twin facts that the universe appears to be fine-tuned for life and yet evolution is extremely slow and life precarious.  The problem with process theology, however, is that God appears to be extremely weak.  Is the concept of a non-omnipotent God worthwhile?

One response to this criticism is that portraying God as weak simply because the universe was not instantaneously and perfectly constructed for life is to misconstrue the meaning of “weak.”  The mere fact that the universe, consisting of at least 10,000,000,000,000,000,000,000 stars, was created out of nothingness and has lasted over 13 billion years does not seem to indicate weakness.

Another response would be that the very gradual incrementalism of evolution may be a necessary component of a fantastically complex system that cannot tolerate errors that would threaten to destroy it.  That is, the various physical laws and numerical constants that underlie the order of the universe exist in such an intricate relationship that a violation of a law in one particular case, or a sudden change in one of the constants, would cause the universe to self-destruct, in the same way that a computer program may crash if a single line of code is incorrect or incompatible with the rest of the program.

In fact, a number of physicists have explicitly described the universe as a type of computer, in the sense that the order of the universe is based on the processing of information in the form of the physical laws and constants.  Of course, the chief difference between the universe and a computer is that we can live with a computer crashing occasionally; we cannot live with the universe crashing even once.  Thus the fact that the universe, while not immortal, never seems to crash indicates that gradual evolution may be necessary.  Perhaps instability on the micro level of the universe (an asteroid occasionally crashing into a planet with life) is the price to be paid for stability on the macro level.

Alternatively, we can conceptualize the order behind the universe as a type of mind, “mind” being defined broadly as any system for processing information.  We can posit three types of mind in the historical development of the universe: cosmic mind (God), biological mind (human/animal mind), and electronic mind (computer).

Cosmic mind can be thought of as pure spirit, or pure information, if you will.  Cosmic mind can create matter and a stable foundation for the universe, but once matter is created, the influence of spirit on matter is relatively weak.  That is, there is a division between the world of spirit and the world of matter that is difficult to bridge.  Biological mind does not know everything cosmic mind does and it is limited in time and space, but biological mind can more efficiently act on matter, since it is part of the world of matter.  Electronic mind (computer) is a creation of biological mind but processes larger amounts of information more quickly, assisting biological mind in the manipulation of matter.

As a result, the evolution of the universe began very slowly, but has recently accelerated as a result of incremental improvements to mind.  According to Stephen Hawking,

The process of biological evolution was very slow at first. It took two and a half billion years, to evolve from the earliest cells to multi-cell animals, and another billion years to evolve through fish and reptiles, to mammals. But then evolution seemed to have speeded up. It only took about a hundred million years, to develop from the early mammals to us. . . . [W]ith the human race, evolution reached a critical stage, comparable in importance with the development of DNA. This was the development of language, and particularly written language. It meant that information can be passed on, from generation to generation, other than genetically, through DNA. . . .  [W]e are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. . . . If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars.  (“Life in the Universe”)

According to physicist Freeman Dyson (Disturbing the Universe), even if interstellar spacecraft achieve only one percent of the speed of light, a speed within the reach of present-day technology, the Milky Way galaxy could be colonized end-to-end in ten million years: a very long time from an individual human’s perspective, but a remarkably short time in the history of evolution, considering that it took about 2.5 billion years simply to make the transition from single-celled life forms to multi-celled creatures.
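
The arithmetic behind that figure is easy to check.  Here is a rough sketch of the crossing-time calculation, using the commonly cited round figure of about 100,000 light-years for the Milky Way’s diameter (my assumption for illustration, not a number taken from Dyson’s book).

```python
# Rough check of the crossing-time arithmetic.  The 100,000 light-year diameter
# is a commonly cited round figure for the Milky Way, used here for illustration.
galaxy_diameter_light_years = 100_000
ship_speed_as_fraction_of_c = 0.01      # one percent of the speed of light

# Distance in light-years divided by speed in light-years per year gives years.
crossing_time_years = galaxy_diameter_light_years / ship_speed_as_fraction_of_c
print(f"{crossing_time_years:,.0f} years")   # 10,000,000 -- ten million years
```

This is only the time for a single end-to-end crossing; an actual colonization wave would presumably take longer, but the order of magnitude is what matters for the comparison with evolutionary timescales.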

So cosmic mind can be very powerful in the long run, but patience is required!