The Influence of Christianity on Western Culture, Part Two: Religion and Culture

In my previous post, I addressed the debate between Christians and secular rationalists over the origins of the modern Western idea of human rights, with Christians attributing these rights to Christianity and secular rationalists crediting human reason. While acknowledging the crimes committed by the Christian churches throughout history, I also expressed skepticism about the ability of reason alone to provide a firm foundation for human rights.

In the second part of this essay, I would like to explore the idea that religion has a deep, partly subconscious, influence on culture and that this influence maintains itself even when people stop going to religious services, stop reading religious texts, and even stop believing in God. (Note: Much of what I am about to say next has been inspired by the works of the Christian theologian Reinhold Niebuhr, who has covered this issue in his books, The Nature and Destiny of Man and The Self and the Dramas of History.)

___________________________

What exactly is religion, and why does it have a deep impact on our culture and thinking? Nearly all of today’s major religions date back between 1,300 and 4,000 years. In some respects, these religions have changed, but in most of their fundamentals, they have not. As such, there are unattractive elements in all of these religions, originating in primitive beliefs held at a time when there was hardly any truly scientific inquiry. As a guide to history, religious texts from this era are extremely unreliable; as a guide to scientific knowledge of the natural world, these religions are close to useless. So why, then, does religion continue to exercise a hold on the minds of human beings today?

I maintain that religion should be thought of primarily as a Theory of the Good. It is a way of thinking that does not (necessarily) result in truthful journalism and history, does not create accurate theories of causation, and ultimately, cares less about what things are really like and more about what things should be like.

As Robert Pirsig has noted, all life forms seek the Good, if only for themselves. They search for food, shelter, warmth, and opportunities for reproduction. More advanced life forms pursue all these and also may seek long-term companionship, a better location, and a more varied diet. If life forms can fly, they may choose to fly for the joy of it; if they can run fast, they may run for the joy of it.

Human beings have all these qualities, but also one more: with our minds, we can imagine an infinite variety of goods in infinite amounts; this is the source of our endless desires. In addition, our more advanced brains also give us the ability to imagine broadened sympathies beyond immediate family and friends, to nations and to humankind as a whole; this is the source of civilization. Finally, we also gain a curiosity about the origin of the world and ourselves and what our ultimate destiny is, or should be; this is the source of myths and faith. It is these imagined, transcendent goods that are the material for religion. And as a religion develops, it creates the basic concepts and categories by which we interpret the world.

There are many similarities among the world’s religions in what is designated good and what is designated evil. But there are important differences as well, differences that have resulted in cultural clashes, sometimes leading to mild disagreements and sometimes escalating into the most vicious of wars. For the purpose of this essay, I am going to set aside the similarities among religions and discuss the differences.

Warning: Adequately covering all of the world’s major religions in a short essay is a hazardous enterprise. My depth of knowledge on this subject is not that great, and I will have to grossly simplify in many cases. I merely ask the reader for tolerance and patience; if you have a criticism, I welcome comments.

The most important difference between the major religions revolves around what is considered to be the highest good. This highest good seems to constitute a fundamental dividing line between the religions that is difficult to bridge. To make it simple, let’s summarize the highest good of each religion in one word:

Judaism – Covenant

Christianity – Love

Islam – Submission (to God)

Buddhism – Nirvana

Hinduism – Moksha

Jainism – Nonviolence (ahimsa)

Confucianism – Ren (Humanity)

Taoism – Wu Wei (effortless action)

How does this perception of the highest good affect the nature of a religion?

Judaism: With only about 15 million adherents today, Judaism might appear to be a minor religion — but in fact, it is widely known throughout the world because of its huge influence on Christianity and Islam, which have billions of followers and have borrowed greatly from Judaism. Fundamental to Judaism is the idea of a covenant between God and His people, in which His people would follow the commandments of God, and God in return would bless His people with protection and abundance. This sort of faith faced many challenges over the centuries, as natural disasters and defeat in war were not always closely correlated with moral failings or a breach of the covenant. Nevertheless, the idea that moral behavior brings blessings has sustained the Jews and made them successful in many occupations for thousands of years. The chief disadvantage of Judaism has been its exclusive ties to a particular nation/ethnic group, which has limited its appeal to the rest of the world.

Christianity: Originating in Judaism, Christianity made a decisive break with Judaism under Jesus, and later, St. Paul. This break consisted primarily in recognizing that the laws of the Jews were somehow inadequate in making people good, because it was possible for someone to follow the letter of the law while remaining a very flawed or even terrible human being. Jesus’ denunciations of legalists and hypocrites in the New Testament are frequent and scathing. The way forward out of this, according to Jesus, was to simply love others, without making distinctions of rank, ethnicity, or religion. This original message of Jesus, and his self-sacrifice, inspired many Jews and non-Jews and led to the gradual, but steadily accelerating, growth of this minor sect. The chief flaw in Christianity became apparent hundreds of years after the crucifixion, when this minority sect became socially and politically powerful, and Christians used their new power to violently oppress others. This stark hypocrisy has discredited Christianity in the eyes of many.

Islam: A relatively young monotheistic religion, Islam arose in the Arabian Peninsula in the seventh century AD. Its prophet, Muhammad, clearly borrowed from Judaism and Christianity, but rejected the exclusivity of Judaism and the status of Jesus as the son of God. The word “Islam” means submission, but contrary to what some commentators claim, it means submission to God, not to Islam or Muslims, which would be blasphemous. The requirements of Islam are fairly rigorous, requiring prayers five times a day; there is also an extensive body of Islamic law that is relatively strict, though implemented unevenly in Islamic countries today, with Iran and Saudi Arabia being among the strictest. There is no denying that the birth of Islam sparked the growth of a great empire that supported an advanced civilization. In the words of Bernard Lewis, “For many centuries the world of Islam was in the forefront of human civilization and achievement.” (What Went Wrong? The Clash Between Islam and Modernity in the Middle East, p. 3) Today, the Islamic world lags behind the rest of the world in many respects, perhaps because its strict social rules and traditions inhibit innovation in the modern world.

Confucianism: Founded by the scholar and government official Confucius in the 6th century B.C., Confucianism can be regarded as a system of morals based on the concept of Ren, or humanity. Confucius emphasized duty to the family and honesty in government, and espoused a version of the Golden Rule. There is a great deal of debate over whether Confucianism is actually a religion or mainly a philosophy and system of ethics. In fact, Confucius was a practical man who did not discuss God or the afterlife, and never proclaimed an ability to perform miracles. But his impact on Chinese civilization, and Asian civilization generally, was tremendous, and the values of Confucius are deeply embedded in Chinese and other Asian societies to this day.

Buddhism: Founded by Gautama Buddha in the 6th century B.C., Buddhism addressed the problem of human suffering. In the view of the Buddha, our suffering arises from desire; because we cannot always get what we want, and what we want is never permanent, the human condition is one of perpetual dissatisfaction. When we die, we are born into another body, to suffer again. The Buddha argued that this cycle of suffering and rebirth could be ended by following the “eightfold path” – right understanding, right thought, right speech, right conduct, right livelihood, right effort, right mindfulness, and right concentration. Following this path could lead one to nirvana, which is the extinguishing of the self and the end of the cycle of rebirth and suffering. While Buddhism is not entirely pacifistic, it contains strong elements of pacifism, and a large number of Buddhists are vegetarian. Many non-Buddhists, however, would dispute the premise that life is suffering and that the dissolving of the self is a solution to suffering.

Hinduism: The third largest religion in the world, Hinduism is also considered to be the world’s oldest religion, with roots stretching back more than 4000 years. However, Hinduism also consists of different schools with diverse beliefs; there are multiple written texts in the Hindu tradition, but no single unifying text, such as the Bible or the Quran. A Hindu can believe in multiple gods or one God, and the Hindu conception of God/s can also vary. There is even a Hindu school of thought that is atheistic; this school goes back thousands of years. There is a strong tradition of nonviolence (ahimsa) in Hinduism, which obviously inspired Gandhi’s campaign of nonviolent resistance against British colonial rule in the early twentieth century. The chief goal of Hindu practices is moksha, or liberation from the cycle of birth, death, and rebirth — roughly similar to the concept of nirvana.

Jainism: Originating in India around 2500 years ago, the Jain religion posits ahimsa, or nonviolence, as the highest good and goal of life. Jains practice a strict vegetarianism that extends even to avoiding certain dairy products whose production may harm animals and any vegetable whose harvesting may harm insects. The other principles of Jainism include anekāntavāda (non-absolutism) and aparigraha (non-attachment). The principle of non-absolutism recognizes that the truth is “many-sided” and impossible to fully express in language, while non-attachment refers to the necessity of avoiding the pursuit of property, taking and keeping only what is necessary.

Taoism: Developed in the 4th century B.C., Taoism is one of the major religions in China, along with Confucianism and Buddhism. “Tao” can be translated as “the Way,” or “the One, which is natural, spontaneous, eternal, nameless, and indescribable. . . the beginning of all things and the way in which all things pursue their course.” Pursuit of the “Way” is not meant to be difficult or arduous or to require sacrifice, as in other religions. Rather, the follower must practice wu wei, or effortless action. The idea is that one must act in accord with the cosmos, not fight or struggle against it. Taoism values naturalness, spontaneity, and detachment from desires.

Now, all these religions, including many I have not listed, have value. The monotheism of Judaism and its strict moralism stood in stark contrast to the ancient pagan religions, which saw the gods as conflictual, cruel, and prone to immoral behavior. The moral disciplines of Islam invigorated a culture and created a civilization more advanced than the Christian Europe of the Middle Ages. Buddhism, Hinduism, and Jainism have placed strong emphasis on overcoming self-centeredness and rejecting violence. Confucianism has instilled the values of respect for elders, love of family, and love of learning throughout East Asia. Taoism’s emphasis on harmony puts a brake on human tendencies to dominate and control.

What I would like to focus on now are the particular contributions Christianity has made to Western civilization and how Christianity has shaped the culture of the West in ways we may not even recognize, contrasting the influence of Christianity with the influence of the other major religions.

__________________________

Christianity has provided four main concepts that have shaped Western culture, concepts that retain their influence today, even among atheists.

(1) The idea of a transcendent good, above and beyond nature and society.

(2) An emphasis on the worth of the individual, above society, government, and nature.

(3) Separation of religion and government.

(4) The idea of a meaningful history, that is, an unfolding story that ends with a conclusion, not a series of random events or cycles.

Let’s examine each of these in detail.

(1) Transcendent Good. I have written in some detail about the concept of transcendence elsewhere. In brief, transcendence refers to “the action of transcending, surmounting, or rising above . . . excelling.” To seek the transcendent is to aspire to something higher than reality. The difficulty with transcendence is that it’s not easily subject to empirical examination:

[B]ecause it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Transcendent religions differ from pantheistic and panentheistic religions by insisting on the greater value or goal of an ideal state of being above and beyond the reality we experience. Since this ideal state is not subject to empirical proof, transcendent religions appear irrational and superstitious to many. Moreover, the dreamy idealism of transcendent religions often results in a fanaticism that leads to intolerance and religious wars. For these reasons, philosophers and scientists in the West usually prefer pantheistic interpretations of God (see Spinoza and Einstein).

The religions of India — Hinduism, Buddhism, Jainism — have strong tendencies toward pantheism or panentheism, in which all existence is bound by a universal spirit, and our duty is to become one with this spirit. There is not a sharp distinction between this universal spirit and the universe or reality itself.

In China, Taoism rejects a personal God, while Confucianism is regarded by most as more a philosophy or moral code than a religion. (The rational pragmatism of Chinese religion is probably why China had no major religious wars until a Chinese Christian in the 19th century led a rebellion on behalf of his “Heavenly Kingdom” that lasted 14 years and led to the deaths of tens of millions.)

And yet, there is a disadvantage in the rational pragmatism of Chinese religions — without a dreamy idealism, a culture can stagnate and become too accepting of evils. Chinese novelist Yan Lianke, who is an atheist, has remarked:

In China, the development of religion is the best lens through which to view the health of a society. Every religion, when it is imported to China, is secularized. The Chinese are profoundly pragmatic. . . . What is absent in Chinese civilization, what we’ve always lacked, is a sense of the sacred. There is no room for higher principles when we live so firmly in the concrete. The possibility of hope and the aspiration to higher ideals are too abstract and therefore get obliterated in our dark, fierce realism. (“Yan Lianke’s Forbidden Satires of China,” The New Yorker, 8 Oct 2018)

Now, Christianity is not alone in positing a transcendent good — Judaism and Islam also do this. But there are other particular qualities of Christianity that we must look to as well.

(2) Individual Worth.

To some extent, all religions value the individual human being. Yet, individual worth is central to Christianity in a way that is not found in other religions. The religions of India certainly value human life, and there are strong elements of pacifism in these religions. But these religions also tend to devalue individuality, in the sense that the ultimate goal is to overcome selfhood and merge with a larger spirit. Confucianism emphasizes moral duty, from the lowest members of society to the highest; individual worth is recognized, but the individual is still part of a hierarchy, and serves that hierarchy. In Taoism, the individual submits to the Way. In Islam, the individual submits to God. In Judaism, the idea of a Chosen People elevates one particular group over others (although this group also falls under the severe judgment of God).

Only under Christianity was the individual human being, whatever that person’s background, elevated to the highest worth. Jesus’ teachings on love and forgiveness, regardless of a person’s status and background, became central to Western civilization — though frequently violated in practice. Jesus’ vision of the afterlife emphasized not a merger with a universal spirit, but a continuance of individual life, free of suffering, in heaven.

3. Separation of religion and government.

Throughout history, the relation between religious institutions and government has varied. In some states, religion and government were unified, as in the Islamic caliphate. In most other cases, political authorities were not religious leaders, but priests were part of the ruling class and assisted the rulers. In China, Confucianism played a major role in the administrative bureaucracy, but Confucianism was a mild and rational religion that had no interest in pursuing and punishing heretics. In Judaism, rabbis often had some actual political power, depending on the historical period and location, but their power was never absolute.

Christianity originated with the martyrdom of a powerless man at the hands of an oppressive government and an intolerant society. In subsequent years, this minor sect was persecuted by the Roman empire. This persecution lasted for several hundred years; at no time during this period did Christianity receive the support, approval, or even tolerance of the imperial government.

Few other religions have originated in such an oppressive atmosphere and survived. China absorbed Confucianism, Taoism, and Buddhism without wars and extensive persecution campaigns. Hinduism, Buddhism, and Jainism grew out of the same roots and largely tolerated each other. Islam had its enemies in its early years, but quickly triumphed in a series of military campaigns that built a great empire. Even the Jews, one of the most persecuted groups in history, were able to practice their religion in their own state(s) for hundreds of years before military defeat and diaspora; in 1948, the Jews again regained a state.

Now, it is true that in the 4th century A.D., Christianity became the official state religion of the Roman empire, and the Christian persecution of pagan worshippers began. Over the centuries, the Catholic Church exercised enormous influence over the culture, economy, and politics of Europe. But by the 18th and 19th centuries, the idea of a strict separation between church and state became widely popular, first in America, then in Europe. While Christian churches fought this reduction in Christian political power and influence, the separation of church and state was at least compatible with the origins of Christianity in persecution and martyrdom, and did not violate the core beliefs of Christianity.

4. A meaningful history.

The idea that history consists of a progressive movement toward an ideal end is not common to all cultures. Ancient Greeks and Romans saw history as a long decline from an original “Golden Age,” or they saw history as essentially cyclical, consisting of a never-ending rise and decline of various civilizations. The historical views of Hinduism, Buddhism, Taoism, and Confucianism were also cyclical.

It was Judaism, Christianity, and Islam that interpreted history as progressing toward an ideal end, a kingdom of heaven. But as a result of the Renaissance in the West, and then the Enlightenment, the idea of an otherworldly kingdom was abandoned, and the ideal end of history became secularized. The German philosopher Hegel (1770-1831) interpreted history as a dialectic clash of ideas, moving toward its ultimate end, which was human freedom. (An early enthusiast for the French Revolution, Hegel once referred to Napoleon as the “world soul” on horseback.) Karl Marx took Hegel’s vision one step further, removing Hegel’s idealism and positing a “dialectical materialism” based on class conflict. This class conflict, according to Marx, would one day end in a final, bloody clash that would end class distinctions and bring about the full equality of human beings under communism.

Alas, these dreams of earthly utopia did not come to pass. Napoleon crowned himself emperor in 1804 and went to work creating a new dynasty and aristocracy with which to rule Europe. In the twentieth century, Communist regimes were extraordinarily oppressive everywhere they arose, killing tens of millions of people. Certainly, the idea of human equality was attractive, and political movements arose and took power based on these ideas. Yet the results were bloodshed and tyranny. Even so, when Soviet communism collapsed, the idea of a secular “end of history,” based on the thought of Hegel, became popular again.

According to the American Christian theologian Reinhold Niebuhr, the visions of Hegel and Marx were merely secular versions of Christianity, which failed because, while ostensibly dedicated to the principles of individual worth, equality, and historical progress, they could not overcome the essential fact of human sinfulness. In Christianity, this sinfulness was the basis for the prophecies in the Book of Revelation which foresaw a final battle between good and evil, requiring the intervention of God in order to achieve a final triumph of good.

According to Niebuhr, the fundamental error of all secular ideologies of historical progress was to suppose that the ability of human beings to reason could conquer tendencies to sinfulness in the same way that advances in science could conquer nature. This did not work, in Niebuhr’s view, because reason could be a tool of self-aggrandizement as well as selflessness, and was therefore insufficient to support universal brotherhood. The fundamental truth about human nature, which the Renaissance and the Enlightenment neglected, was that man is an unbreakable organic unity of mind, body, and spirit. Man’s increasing capacity to use reason resulted in new technologies and wealth but did not — and could not — overcome human tendencies to seek power. For this reason, human history was the story of the growth of both good and evil and not the triumph of good over evil. Only the intervention of God, through Christ, could bring the final fulfillment of history. Certainly, belief in this ultimate fulfillment requires a leap of faith — but whether or not one believes the Book of Revelation, it is hard to deny that human dreams of earthly utopia have been frustrated time and time again.

Perhaps at this point, you may agree with my general assessment of Christian ideas, and even find some similarities between Christian ideas and contemporary secular liberalism. Nevertheless, you may also conclude that the causal linkage between Christianity and modern liberalism has not been established. After all, the first modern liberal democracies did not emerge until nearly 1800 years after Christ. Why so long? Why did the Christian churches have such a long record of intolerance and contempt for liberal ideas? Why did the Catholic Church so often ally with monarchs, defend feudalism, and oppose liberal revolutions? Why did various Christian churches tolerate and approve of slavery for hundreds of years? I will address these issues in Part Three.

The Influence of Christianity on Western Culture, Part One: Liberty, Equality, and Human Rights

Does religion have a deep influence on the minds of those living in a largely secular culture, shaping the subconscious beliefs and assumptions of even staunch atheists? Such is the argument of New York Times columnist Ross Douthat, who argues that the contemporary secular liberalism of America and Europe is rooted in the principles of Christianity, and that our civilization suffers when it borrows selectively from Christianity while rejecting the religion as a whole.

Douthat’s provocative claim was challenged by liberal commentators Will Saletan and Julian Sanchez, and if you have time, you can review the three-sided debate here, here, here, here, and here. In brief, Douthat argues the following:

When I look at your secular liberalism, I see a system of thought that looks rather like a Christian heresy, and not necessarily a particularly coherent one at that. In Bad Religion, I describe heresy as a form of belief that tends to emphasize certain elements of the Christian synthesis while downgrading or dismissing other aspects of that whole. And it isn’t surprising that liberalism, which after all developed in a Christian civilization, does exactly that, drawing implicitly on the Christian intellectual inheritance to ground its liberty-equality-fraternity ideals.

Indeed, it’s completely obvious that absent the Christian faith, there would be no liberalism at all. No ideal of universal human rights without Jesus’ radical upending of social hierarchies (including his death alongside common criminals on the cross). No separation of church and state without the gospels’ ‘render unto Caesar’ and St. Augustine’s two cities. No liberal confidence about the march of historical progress without the Judeo-Christian interpretation of history as an unfolding story rather than an endlessly repeating wheel. . . .

And the more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual ‘skyhooks,’ suspended halfway between our earth and the heaven on which many liberals have long since given up. 

Julian Sanchez, a scholar with the Cato Institute, responds to Douthat by noting that societies don’t need to agree on God and religion to support human rights, only to agree that human rights are good. According to Sanchez, invoking God as the source of goodness doesn’t really solve any problems; at best, it provides one prudential reason to behave well (i.e., to obtain rewards and avoid punishment in the afterlife). If we believe human rights are good and need to be preserved, the idea of God adds nothing to the belief: “The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own?” Furthermore, Sanchez argues that morals can be regarded as “normative properties” that are already part of reality, and that secular moralists can appeal to this reality just as easily as believers appeal to God, only normative properties don’t require beliefs about implausible deities and “Middle Eastern folklore.”

Both Douthat and Sanchez make some good arguments, but there are some weaknesses in both sides’ claims that I wish to explore in this extended essay. My view, in brief, is this: Christianity, or any other religion, does not have to be a package deal. Religious claims about various miracles that seem to violate the patterns of nature established by science or the empirical findings of history and archeology should be subject to scrutiny and skepticism like any other claim. Traditional morals that have long-standing religious justifications, from child marriage to slavery, should be subject to the same scrutiny, and rejected when necessary.

And yet, it is difficult to deny the influence of religion on our perceptions — and conceptions — of what is good. I find existing attempts to base human morality and rights solely on reason and science to be unpersuasive; morals are not like the patterns of nature, nor can they be proved by the deductive methods of reason without accepting premises that cannot be proved. While rooted in reality, morals seem to point to something higher than our current reality. And human freedom to choose defies our attempts to prove the existence of morals in the same way that we can prove the deterministic patterns of gravity, chemical reactions, and nuclear fission.

____________________________

Let us consider one such attempt to establish human rights through science and reason by Michael Shermer, director of The Skeptics Society and founder of Skeptic magazine. In an article for Theology and Science, Shermer attempts to found human rights on reason and science, relying exclusively on “nature and nature’s laws.”

Mr. Shermer begins his essay by noting the many people in Europe who were put to death for the crime of “witchcraft” in the 15th through 17th centuries, and how this witch-hunting hysteria was endorsed by the Catholic Church. Fortunately, notes Mr. Shermer, “scientific naturalism,” the “principle that the world is governed by natural laws and forces that can be understood,” and “Enlightenment humanism” arose in the 18th and 19th centuries, destroying the old superstitions of religion. Shermer cites Steven Pinker to explain how the application of scientific naturalism to human affairs provided the principles on which human societies made moral progress:

When a large enough community of free, rational agents confers on how a society should run its affairs, steered by logical consistency and feedback from the world, their consensus will veer in certain directions. Just as we don’t have to explain why molecular biologists discovered that DNA has four bases . . . we may not have to explain why enlightened thinkers would eventually argue against African slavery, cruel punishments, despotic monarchs, and the execution of witches and heretics.

Shermer argues that morals follow logically from reason and observation, and proposes a Principle of Moral Good: “Always act with someone else’s moral good in mind, and never act in a way that leads to someone else’s moral loss (through force or fraud).”

Unfortunately, this principle, allegedly founded on reason and science, appears to be simply another version of the “Golden Rule,” which has been in existence for over two thousand years, and is found in nearly all the major religions. (The West knows the Golden Rule mainly through Christianity.) None of the religions discovered this rule through science or formal logical deduction. Human rights are not subject to empirical proof like the laws of physics and they don’t follow logically from deductive arguments, unless one begins with premises that support — or even presuppose — the conclusion.

Human rights are a cultural creation. They don’t exist in nature, at least not in a way that we can observe them. To the extent human rights exist, they exist in social practices and laws — sometimes only among a handful of people, sometimes only for certain categories of persons, sometimes widely in society. People can choose to honor and respect human rights, or they can violate such rights with impunity.

For this reason, I regard human rights as a transcendent value, something that does not exist in nature, but that many of us regard as worth aspiring to. In a previous essay on transcendence, I noted:

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach. . . . We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are.

The evils that human beings have inflicted on other human beings throughout history cannot all be attributed to superstitions and mistaken beliefs, whether about witchcraft or the alleged inferiority of certain races. Far more people have been killed in wars for territory, natural resources, control of trade routes, and the power to rule than have been killed over accusations of witchcraft. And why not? Is it not compatible with reason to desire wealth and power? The entire basis of economics is that people will seek to maximize their wealth. And the basis of modern liberal democracy is the idea that checks and balances are needed to block excessive power-seeking, that reason itself is insufficient. Historians don’t ask why princes seek to be kings, or why nations seek to expand their territory — it is taken for granted that these desires are inherent to human beings and compatible with reason. As for slavery, it may have been justified by reference to certain races as inferior, but the pursuit of wealth was the main motivation of slave owners, with the justifications tacked on for appearance’s sake. After all, the fact that states in the American South felt compelled to pass laws forbidding the teaching of blacks to read indicates that southerners did in fact see blacks as human beings capable of reasoning.

The problem with relying on reason as a basis for human rights is that reason by itself is unable to bridge the gap between desiring one’s own good and desiring the same good for others. It is a highly useful premise in economics and political science that human beings will act to maximize their own good, and from this premise many important and useful theories have been developed. Acting for the good of others, on the other hand, particularly when it involves a high degree of self-sacrifice, is extremely variable. It takes place within families; to a limited extent it results in charitable contributions to strangers; and in some cases, soldiers and emergency rescue workers give their lives to save others. But it’s not reason that’s the motivating factor here — it’s love and sympathy and a sense of duty. Reason, on the other hand, is the tool that tells you how much you can give to others without going broke.

Still, it’s one thing to criticize reason as the basis of human rights; it is quite another to credit Christianity for human rights. The historical record of Christianity with regard to human rights is not one that inspires. Nearly all of the Christian churches have been guilty of instigating, endorsing, or tolerating slavery, feudalism, despotism, wars, and torture, for hundreds of years. The record is long and damning.

Still, is it possible that Christianity provided the cultural assumptions, categories, and framework for the eventual flourishing of human rights? After all, neither the American Revolution nor the French Revolution was successful at first in fully implementing human rights. America fought a civil war before slavery was ended, did not allow women to vote until 1920, and did not grant most blacks a consistently recognized right to vote until the 1960s. The French Revolution of 1789 degenerated into terror, dictatorship, and wars of conquest; it took many decades for France to attain a reasonably stable republic. The pursuit of human rights even under secular liberalism was a long, hard struggle, in which ideals were only very gradually realized.

This long struggle to implement liberal ideals raises the question: Is it possible that Christianity had a long-term impact on the development of the West that we don’t recognize because we had already absorbed Christian assumptions and premises in our reason and did not question them? This is the question that will be addressed in subsequent posts.

The Dynamic Quality of Henri Bergson

Robert Pirsig writes in Lila that Quality contains a dynamic good in addition to a static good. This dynamic good consists of a search for “betterness” that is unplanned and has no specific destination, but is nevertheless responsible for all progress. Once a dynamic good solidifies into a concept, practice, or tradition in a culture, it becomes a static good. Creativity, mysticism, dreams, and even good guesses or luck are examples of dynamic good in action. Religious traditions, laws, and science textbooks are examples of static goods.

Pirsig describes dynamic quality as the “pre-intellectual cutting edge of reality.” By this, he means that before concepts, logic, laws, and mathematical formulas are discovered, there is a process of searching and grasping that has not yet settled into a pattern or solution. For example, invention and discovery are often not the outcome of calculation or logical deduction, but of a “free association of ideas” that tends to occur when one is not mentally concentrating at all. Many creative people, from writers to mathematicians, have noted that they came up with their best ideas while resting, engaging in everyday activities, or dreaming.

Dynamic quality is not just responsible for human creation — it is fundamental to all evolution, from the physical level of atoms and molecules, to the biological level of life forms, to the social level of human civilization, to the intellectual level of human thought. Dynamic quality exists everywhere, but it has no specific goals or plans — it always consists of spur-of-the-moment actions, decisions, and guesses about how to overcome obstacles to “betterness.”

It is difficult to conceive of dynamic quality — by its very nature, it is resistant to conceptualization and definition, because it has no stable form or structure. If it did have a stable form or structure, it would not be dynamic.

However, the French philosopher Henri Bergson (1859-1941) provided a way to think about dynamic quality, by positing change as the fundamental nature of reality. (See Beyond the “Mechanism” Metaphor in Physics.) In Bergson’s view, traditional reason, science, and philosophy created static, eternal forms and posited these forms as the foundation of reality — but in fact these forms were tools for understanding reality, not reality itself. Reality always flowed and was impossible to fully capture in any static conceptual form. This flow could best be understood through perception rather than conception. Unfortunately, as philosophy created larger and larger conceptual categories, philosophy tended to become dominated by empty abstractions such as “substance,” “numbers,” and “ideas.” Bergson proposed that only an intuitive approach that enlarged perceptual knowledge through feeling and imagination could advance philosophy out of the dead end of static abstractions.

________________________

The Flow of Time

Bergson argued that we miss the flow of time when we use the traditional tools of science, mathematics, and philosophy. Science conceives of time as simply one coordinate in a deterministic space-time block ruled by eternal laws; mathematics conceives of time as consisting of equal segments on a graph; and philosophers since Plato have conceptualized the world as consisting of the passing shadows of eternal forms.

These may be useful conceptualizations, argues Bergson, but they do not truly grasp time. Whether it is an eternal law, a graph, or an eternal form, such depictions are snapshots of reality; they do not and cannot represent the indivisible flow of time that we experience. The laws of science in particular neglected the elements of indeterminism and freedom in the universe. (Henri Bergson once debated Einstein on this topic). The neglect of real change by science was the result of science’s ambition to foresee all things, which motivated scientists to focus on the repeatable and calculable elements of nature, rather than the genuinely new. (The Creative Mind, Mineola, New York: Dover, 2007, p. 3) Those events that could not be predicted were tossed aside as being merely random or unknowable. As for philosophy, Bergson complained that the eternal forms of the philosophers were empty abstractions — the categories of beauty and justice and truth were insufficient to serve as representations of real experience.

Actual reality, according to Bergson, consisted of “unceasing creation, the uninterrupted upsurge of novelty.” (The Creative Mind, p. 7) Time was not merely a coordinate for recording motion in a determinist universe; time was “a vehicle of creation and choice.” (p. 75) The reality of change could not be captured in static concepts, but could only be grasped intuitively. While scientists saw evolution as a combination of mechanism and random change, Bergson saw evolution as a result of a vital impulse (élan vital) that pervaded the universe. Although this vital impetus possessed an original unity, individual life forms used this vital impetus for their own ends, creating conflict between life forms. (Creative Evolution, pp. 50-51)

Biologists attacked Bergson on the grounds that there was no “vital impulse” they could detect and measure. They argued from the reductionist premise that everything can be explained by reference to smaller parts: since there was no single detectable force animating life, there was no “vital impetus.” But Bergson’s premise was holistic, referring to the broader action of organic development from lower orders to higher orders, culminating in human beings. There was no separate force — rather, entities organized, survived, and reproduced by absorbing and processing energy in multiple forms. In the words of one eminent biologist, organisms are “resilient patterns . . . in an energy flow.” There is no separate or unique energy of life – just energy.

The Superiority of Perception over Conception

Bergson believed with William James that all knowledge originated in perception and feeling; as human mental powers increased, conceptual categories were created to organize and generalize what we (and others) discovered through our senses. Concepts were necessary to advance human knowledge, of course. But over time, abstract concepts came to dominate human thought to the point at which pure ideas were conceived as the ultimate reality — hence Platonism in philosophy, mathematical Platonism in mathematics, and eternal laws in science. Bergson believed that although we needed concepts, we also needed to rediscover the roots of concepts in perception and feeling:

If the senses and the consciousness had an unlimited scope, if in the double direction of matter and mind the faculty of perceiving was indefinite, one would not need to conceive any more than to reason. Conceiving is a make-shift when perception is not granted to us, and reasoning is done in order to fill up the gaps of perception or to extend its scope. I do not deny the utility of abstract and general ideas, — any more than I question the value of bank-notes. But just as the note is only a promise of gold, so a conception has value only through the eventual perceptions it represents. . . . the most ingeniously assembled conceptions and the most learnedly constructed reasonings collapse like a house of cards the moment the fact — a single fact rarely seen — collides with these conceptions and these reasonings. There is not a single metaphysician, moreover, not one theologian, who is not ready to affirm that a perfect being is one who knows all things intuitively without having to go through reasoning, abstraction and generalisation. (The Creative Mind, pp. 108-9)

In the end, despite their obvious utility, the conceptions of philosophy and science tend “to weaken our concrete vision of the universe.” (p. 111) But we clearly do not have God-like powers to perceive everything, and we are not likely to get such powers. So what do we do? Bergson argues that instead of “trying to rise above our perception of things” through concepts, we “plunge into [perception] for the purpose of deepening it and widening it.” (p. 111) But how exactly are we to do this?

Enlarging Perception

There is one group of people, argues Bergson, that have mastered the ability to deepen and widen perception: artists. From paintings to poetry to novels and musical compositions, artists are able to show us things and events that we do not directly perceive and evoke a mood within us that we can understand even if the particular form that the artist presents may never have been seen or heard by us before. Bergson writes that artists are idealists who are often absent-mindedly detached from “reality.” But it is precisely because artists are detached from everyday living that they are able to see things that ordinary, practical people do not:

[Our] perception . . . isolates that part of reality as a whole that interests us; it shows us less the things themselves than the use we can make of them. It classifies, it labels them beforehand; we scarcely look at the object, it is enough for us to know which category it belongs to. But now and then, by a lucky accident, men arise whose senses or whose consciousness are less adherent to life. Nature has forgotten to attach their faculty of perceiving to their faculty of acting. When they look at a thing, they see it for itself, and not for themselves. They do not perceive simply with a view to action; they perceive in order to perceive — for nothing, for the pleasure of doing so. In regard to a certain aspect of their nature, whether it be their consciousness or one of their senses, they are born detached; and according to whether this detachment is that of a particular sense, or of consciousness, they are painters or sculptors, musicians or poets. It is therefore a much more direct vision of reality that we find in the different arts; and it is because the artist is less intent on utilizing his perception that he perceives a greater number of things. (The Creative Mind, p. 114)

The Method of Intuition

Bergson argued that the indivisible flow of time and the holistic nature of reality required an intuitive approach, that is, “the sympathy by which one is transported into the interior of an object in order to coincide with what there is unique and consequently inexpressible in it.” (The Creative Mind, p. 135) Analysis, as in the scientific disciplines, breaks down objects into elements, but this method of understanding is a translation, an insight that is less direct and holistic than intuition. The intuition comes first, and one can pass from intuition to analysis but not from analysis to intuition.

In his essay on the French philosopher Ravaisson, Bergson underscored the benefits and necessity of an intuitive approach:

[Ravaisson] distinguished two different ways of philosophizing. The first proceeds by analysis; it resolves things into their inert elements; from simplification to simplification it passes to what is most abstract and empty. Furthermore, it matters little whether this work of abstraction is effected by a physicist that we may call a mechanist or by a logician who professes to be an idealist: in either case it is materialism. The other method not only takes into account the elements but their order, their mutual agreement and their common direction. It no longer explains the living by the dead, but, seeing life everywhere, it defines the most elementary forms by their aspiration toward a higher form of life. It no longer brings the higher down to the lower, but on the contrary, the lower to the higher. It is, in the real sense of the word, spiritualism. (p. 202)

From Philosophy to Religion

A religious tendency is apparent in Bergson’s philosophical writings, and this tendency grew more pronounced as Bergson grew older. It is likely that Bergson saw religion as a form of perceptual knowledge of the Good, widened by imagination. Bergson’s final major work, The Two Sources of Morality and Religion (Notre Dame, IN: University of Notre Dame Press, 1977) was both a philosophical critique of religion and a religious critique of philosophy, while acknowledging the contributions of both forms of knowledge. Bergson drew a distinction between “static religion,” which he believed originated in social obligations to society, and “dynamic religion,” which he argued originated in mysticism and put humans “in the stream of the creative impetus.” (The Two Sources of Morality and Religion, p. 179)

Bergson was a harsh critic of the superstitions of “static religion,” which he called a “farrago of error and folly.” These superstitions were common in all cultures, and originated in human imagination, which created myths to explain natural events and human history. However, Bergson noted, static religion did play a role in unifying primitive societies and creating a common culture within which individuals would subordinate their interests to the common good of society. Static religion created and enforced social obligations, without which societies could not endure. Religion also provided comfort against the depressing reality of death. (The Two Sources of Morality and Religion, pp. 102-22)

In addition, it would be a mistake, Bergson argued, to suppose that one could obtain dynamic religion without the foundation of static religion. Even the superstitions of static religion originated in the human perception of a beneficent virtue that became elaborated into myths. Perhaps thinking of a cool running spring or a warm fire on the hearth as the action of spirits or gods was a case of imagination run rampant, but these were still real goods, as were the other goods provided by the pagan gods.

Dynamic religion originated in static religion, but also moved above and beyond it, with a small number of exceptional human beings who were able to reach the divine source: “In our eyes, the ultimate end of mysticism is the establishment of a contact . . . with the creative effort which life itself manifests. This effort is of God, if it is not God himself. The great mystic is to be conceived as an individual being, capable of transcending the limitations imposed on the species by its material nature, thus continuing and extending the divine action.” (pp. 220-21)

In Bergson’s view, mysticism is intuition turned inward, to the “roots of our being, and thus to the very principle of life in general.” (p. 250) Rational philosophy cannot fully capture the nature of mysticism, because the insights of mysticism cannot be captured in words or symbols, except perhaps in the word “love”:

God is love, and the object of love: herein lies the whole contribution of mysticism. About this twofold love the mystic will never have done talking. His description is interminable, because what he wants to describe is ineffable. But what he does state clearly is that divine love is not a thing of God: it is God Himself. (p. 252)

Even so, just as the dynamic religion bases its advanced moral insights in part on the social obligations of static religion, dynamic religion also must be propagated through the images and symbols supplied by the myths of static religion. (One can see this interplay of static and dynamic religion in Jesus and Gandhi, both of whom were rooted in their traditional religions, but offered original teachings and insights that went beyond their traditions.)

Toward the end of his life, Henri Bergson strongly considered converting to Catholicism (although the Church had already placed three of Bergson’s works on its Index of Prohibited Books). Bergson saw Catholicism as best representing his philosophical inclinations for knowing through perception and intuition, and for joining the vital impetus responsible for creation. However, Bergson was Jewish, and the anti-Semitism of 1930s and 1940s Europe made him reluctant to officially break with the Jewish people. When the Nazis conquered France in 1940 and the Vichy puppet government of France decided to persecute Jews, Bergson registered with the authorities as a Jew and accepted the persecutions of the Vichy regime with stoicism. Bergson died in 1941 at the age of 81.

Once among the most celebrated intellectuals in the world, today Bergson is largely forgotten. Even among French philosophers, Bergson is much less known than Descartes, Sartre, Comte, and Foucault. It is widely believed that Bergson lost his debate with Einstein in 1922 on the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time, p. 6) But it is recognized today even among physicists that while Einstein’s conception of spacetime in relativity theory is an excellent theory for predicting the motion of objects, it does not disprove the existence of time and real change. It is also true that Bergson’s writings are extraordinarily difficult to understand at times. One can go through pages of dense, complex text trying to understand what Bergson is saying, get suddenly hit with a colorful metaphor that seems to explain everything — and then have a dozen more questions about the meaning of the metaphor. Nevertheless, Bergson remains one of the very few philosophers who looked beyond eternal forms to the reality of a dynamic universe, a universe moved by a vital impetus always creating, always changing, never resting.

Beyond the “Mechanism” Metaphor in Physics

In previous posts, I discussed the use of the “mechanism” metaphor in science. I argued that this metaphor was useful historically in helping us to make progress in understanding cause-and-effect patterns in nature, but was limited or even deceptive in a number of important respects. In particular, the field of biology is characterized by evidence of spontaneity, adaptability, progress, and cooperative behavior among life forms that make the mechanism metaphor inadequate in characterizing and explaining life.

Physics is widely regarded as the pinnacle of the “hard sciences” and, as such, the field most suited to the mechanism metaphor. In fact, many physicists are so wedded to the idea of the universe as a mechanism that they are inclined to speak as if the universe literally were a mechanism, even claiming that we humans are actually living inside a computer simulation. Why alien races would go through the trouble of creating simulated humans such as ourselves, with our dull, slow-moving lives, is never explained. But physicists are able to get away with these wild speculations because of their stupendous success in explaining and predicting the motion and actions of objects, from the smallest particles to the largest galaxies.

Fundamental to the success of physics is the idea that all objects are subject to laws that determine their behavior. Laws are what determine how the various parts of the universal mechanism move and interact. But when one starts asking questions about what precisely physical laws are and where they come from, one runs into questions and controversies that have never been successfully resolved.

Prior to the Big Bang theory, developed in the early twentieth century, the prevailing theory among physicists was that the universe existed eternally and had no beginning. When an accumulation of astronomical observations about the expansion of the universe led to the conclusion that the universe probably began from a single point that rapidly expanded outward, physicists gradually came to accept the idea that the universe had a beginning, in a so-called “Big Bang.” However, this raised a problem: if laws ran the universe, and the universe had a beginning, then the laws must have preexisted the universe. In fact, the laws must have been eternal.

But what evidence is there for the notion that the laws of the universe are eternal? Does it really make sense to think of the law of gravity as existing before the universe existed, before gravity itself existed, before planets, stars, space, and time existed? Does it make sense to think of the law of conservation of mass existing before mass existed, or Mendel’s laws of genetics existing before genes existed? Where and how did they exist? If one takes the logic of physics far enough, one is apt to conclude that the laws of physics are some kind of God(s), or that God is a mechanism.

Furthermore, what is the evidence for the notion that laws completely determine the motion of every particle in the universe, that the universe is deterministic? Observations and experiments under controlled conditions confirmed that the laws of Newtonian physics could indeed predict the motions of various objects. But did these observations and experiments prove that all objects everywhere behaved in completely predictable patterns?

Despite some fairly large holes in the ideas of eternal laws and determinism, both ideas have been popular among physicists and among many intellectuals. There have been dissenters, however.

The French philosopher Henri Bergson (1859-1941) argued that the universe was in fact a highly dynamic system with a large degree of freedom within it. According to Bergson, our ideas about eternal laws originated in human attempts to understand the reality of change by using fixed, static concepts. These concepts were useful tools — in fact, the tools had to be fixed and static in order to be useful. But the reality that these concepts pointed to was in fact flowing, all “things” were in flux, and we made a major mistake by equating our static concepts with reality and positing a world of eternal forms, whether that of Plato or the physicists. Actual reality, according to Bergson, was “unceasing creation, the uninterrupted up-surge of novelty.” (Henri Bergson, The Creative Mind, p. 7) Moreover, the flow of time was inherently continuous; we could try to measure time by chopping it into equal segments based on the ticking of a clock or by drawing a graph with units of time along one axis, but real time did not consist of segments any more than a flowing river consisted of segments. Time is a “vehicle of creation and choice” that refutes the idea of determinism. (p. 75)

Bergson did not dispute the experimental findings of physics, but argued that the laws of physics were insufficient to describe what the universe was really like. Physicists denied the reality of time and “unceasing creation,” according to Bergson, because scientists were searching for repeatable patterns, paying little or no attention to what was genuinely new:

[A]gainst this idea of the absolute originality and unforeseeability of forms our whole intellect rises in revolt. The essential function of our intellect, as the evolution of life has fashioned it, is to be a light for our conduct, to make ready for our action on things, to foresee, for a given situation, the events, favorable or unfavorable, which may follow thereupon. Intellect therefore instinctively selects in a given situation whatever is like something already known. . .  Science carries this faculty to the highest possible degree of exactitude and precision, but does not alter its essential character. Like ordinary knowledge, in dealing with things science is concerned only with the aspect of repetition. (Henri Bergson, Creative Evolution, p. 29)

Bergson acknowledged the existence of repetitive patterns in nature, but rather than seeing these patterns as reflecting eternal and wholly deterministic laws, Bergson proposed a different metaphor. Drawing upon the work of the French philosopher Felix Ravaisson, Bergson argued that nature develops “habits” of behavior in the same manner that human beings develop habits, from initial choices of behavior that over time become regular and subconscious: “Should we not then imagine nature, in this form, as an obscured consciousness and a dormant will? Habit thus gives us the living demonstration of this truth, that mechanism is not sufficient to itself: it is, so to speak, only the fossilized residue of a spiritual activity.” In Bergson’s view, spiritual activity was the ultimate foundation of reality, not the habits/mechanisms that resulted from it (The Creative Mind, pp. 197-98, 208).

Bergson’s views did not go over well with most scientists. In 1922, in Paris, Henri Bergson publicly debated Albert Einstein about the nature of time. (See Jimena Canales, The Physicist and the Philosopher: Einstein, Bergson, and the Debate that Changed Our Understanding of Time). Einstein’s theory of relativity posited that there was no absolute time that ticked at the same rate for every body in the universe. Time was linked to space in a single space-time continuum, the movement of bodies was entirely deterministic, and this movement could be predicted by calculating the space-time coordinates of these bodies. In Einstein’s view, there was no sharp distinction between past, present, and future — all events existed in a single block of space-time. This idea of a “block universe” is still predominant in physics today, though it is not without dissenters.

[Figure: Most people have a “presentist” view of reality, in which only the present moment is real; physicists prefer the “block universe” view, in which all events, past, present, and future, are equally real. Source: Time in Cosmology]

In fact, when Einstein’s friend Michele Besso passed away in 1955, Einstein wrote a letter of condolence to Besso’s family in which he expressed his sympathies to the family but also declared that the separation between past, present, and future was an illusion anyway, so death did not mean anything. (The Physicist and the Philosopher, pp. 338-9)

It is widely believed that Bergson lost his 1922 debate with Einstein, in large part because Bergson did not fully understand Einstein’s theory of relativity. Nevertheless, while physicists everywhere eventually came to accept relativity, many rejected Einstein’s notion of a completely determinist universe which moved as predictably as a mechanism. The French physicist Louis de Broglie and the Japanese physicist Satosi Watanabe were proponents of Bergson and argued that the indeterminacy of subatomic particles supported Bergson’s view of the reality of freedom, the flow of time, and change. Einstein, on the other hand, never did accept the indeterminacy of quantum physics and insisted to his dying day that there must be “hidden” variables that would explain everything.  (The Physicist and the Philosopher, pp. 234-38)

 

_____________________________

 

Moving forward to the present day, the debate over the reality of time has been rekindled by Lee Smolin, a theoretical physicist at the Perimeter Institute for Theoretical Physics. In Time Reborn, Smolin proposes that time is indeed real and that the neglect of this fact has hindered progress in physics and cosmology. Contrary to what you may have been taught in your science classes, Smolin argues that the laws of nature are not eternal and precise but emergent and approximate. Borrowing the theory of evolution from biology, Smolin argues that the laws of the universe evolve over time, that genuine novelty is real, and that the laws are not precise iron laws but approximate, granting a degree of freedom to what was formerly considered a rigidly deterministic universe.

One major problem with physics, Smolin argues, is that scientists tend to generalize or extrapolate based on conclusions drawn from laboratory experiments conducted under highly controlled conditions, with extraneous variables carefully excluded — Smolin calls this “physics in a box.” Now there is nothing inherently wrong with “physics in a box” — carefully controlled experiments that exclude extraneous variables are absolutely essential to progress in scientific knowledge. The problem is that one cannot take a law derived from such a controlled experiment and simply scale it up to apply to the entire universe; Smolin calls this the “cosmological fallacy.” As Smolin argues, it makes no sense to simply scale up the findings from these controlled experiments, because the universe contains everything, including the extraneous variables! Controlled experiments are too restricted and artificial to serve as an adequate basis for a theory that includes everything. Instead of generalizing from the bottom up based on isolated subsystems of the universe, physicists must construct theories of the whole universe, from the top down. (Time Reborn, pp. 38-39, 97)

Smolin is not the first scientist to argue that the laws of nature may have evolved over time. Smolin points to the eminent physicists Paul Dirac, John Archibald Wheeler, and Richard Feynman as previous proponents of the idea. (Time Reborn, pp. xxv-xxvi) But all of these theorists were preceded by the American philosopher and scientist Charles Sanders Peirce (1839-1914), who argued that “the only possible way of accounting for the laws of nature and for uniformity in general is to suppose them results of evolution.” (Time Reborn, p. xxv) Smolin credits Peirce with originating this idea and proposes two ways in which the laws of nature may have evolved.

The first way is through a series of “Big Bangs,” in which each new universe selects different laws. Smolin argues that there must have been an endless succession of Big Bangs in the past, leading to our current universe with its particular set of laws. (p. 120) Furthermore, Smolin proposes that black holes create new, baby universes, each with its own laws — so the black holes in our universe are the parents of other universes, and our own universe is the child of a black hole in some other universe! (pp. 123-25) Unfortunately, this theory seems impossible to test adequately unless there is some way of observing these other universes with their different laws.

Smolin also proposes that laws can arise at the quantum level based on what he calls the “principle of precedence.” Smolin makes an analogy to Anglo-Saxon law, in which the decisions of judges in the past serve as precedents for decisions made today and in the future, in an ever-growing body of “common law.” The idea is that everything in the universe has a tendency to develop habits; when a truly novel event occurs, and then occurs again, and again, it settles into a pattern of repetition; that settled pattern of repetition indicates the development of a new law of nature. The law did not previously exist eternally — it emerged out of habit. (Time Reborn, pp. 146-53) Furthermore, rather than being bound by deterministic laws, the universe remains genuinely open and free, able to build new forms on top of existing forms. Smolin argues, “In the time-bound picture I propose, the universe is a process for breeding novel phenomena and states of organization, which will forever renew itself as it evolves to states of ever higher complexity and organization. The observational record tells us unambiguously that the universe is getting more interesting as time goes on.” (p. 194)
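The idea of laws congealing out of habit can be illustrated with a toy reinforcement process. The sketch below is my own (a Pólya-urn-style simulation, not Smolin’s actual formalism): each time an outcome occurs, its weight grows, so a pattern of repetition gradually hardens into something that behaves like a law.

```python
import random

def simulate_precedence(options, steps, reinforcement=1.0, seed=0):
    """Toy sketch of a 'principle of precedence': every occurrence of
    an outcome increases that outcome's weight, so repetition settles
    into a habit that behaves like a law."""
    rng = random.Random(seed)
    weights = {o: 1.0 for o in options}  # all outcomes start equally novel
    history = []
    for _ in range(steps):
        choice = rng.choices(list(weights), weights=list(weights.values()))[0]
        weights[choice] += reinforcement  # precedent reinforces itself
        history.append(choice)
    return weights, history

weights, history = simulate_precedence(["A", "B", "C"], steps=1000)
# Early picks are nearly random; late picks overwhelmingly repeat
# whichever outcome happened to accumulate precedent first.
print(weights)
```

Which outcome dominates depends on the accidents of early history, not on any pre-existing rule, which is the point of the metaphor.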

And yet, despite his openness to the idea of genuine novelty in the evolution of the universe, even Smolin is unable to get away from the idea of mechanisms being ultimately responsible for everything. Smolin writes that the universe began with a particular set of initial conditions and then asks “What mechanism selected the actual initial conditions out of the infinite set of possibilities?” (pp. 97-98) He does not consider the possibility that in the beginning, perhaps there was no mechanism. Indeed, this is the problem with any cosmology that aims to provide a total explanation for existence: as one goes back in time searching for origins, one eventually reaches a first cause that has no prior cause, and thus no causal explanation. One either has to posit a creator-God or an eternal, self-sufficient mechanism, or else throw up one’s hands and accept that we are faced with an unsolvable mystery.

In fact, Smolin is not as radical as his inspiration, Charles Sanders Peirce. According to Peirce, the universe did not start out with a mechanism but rather began from a condition of maximum freedom and spontaneity, only gradually adopting certain “habits” which evolved into laws. Furthermore, even after the development of laws, the universe retained a great deal of chance and spontaneity. Laws specified certain regularities, but even within these regularities, a great deal of freedom still existed. For example, life forms may have been bound to the surface of the earth and subject to the regular rotation of the earth, the orbit of the earth around the sun, and the limitations of biology, but nonetheless life forms still retained considerable freedom.

Peirce, who believed in God, held that the universe was pervaded not by mechanism but by mind, which was by definition characterized by freedom and spontaneity. As the mind/universe developed certain habits, these habits congealed into laws and solid matter. In Peirce’s view, “matter . . . [is] mere specialised and partially deadened mind.” (“The Law of Mind,” The Monist, vol. 2, no. 4, July 1892) This view is somewhat similar to that of the physicist Werner Heisenberg, who noted that “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .”

One contemporary philosopher, Philip Goff of Durham University, following Peirce and other thinkers, has argued that consciousness is not restricted to humans but in fact pervades the universe, from the smallest subatomic particles to the most intelligent human beings. This theory is known as panpsychism. (see Goff’s book Galileo’s Error: Foundations for a New Science of Consciousness) Goff does not argue that atoms, rocks, water, stars, etc. are like humans in their thought process, but that they have experiences, albeit very primitive and simple experiences compared to humans. The difference between the experiences of a human and the experiences of an electron is vast, but the difference still exists on a spectrum; there is no sharp dividing line that dictates that experience ends when one gets down to the level of insects, cells, viruses, molecules, atoms, or subatomic particles. In Dr. Goff’s words:

Human beings have a very rich and complex experience; horses less so; mice less so again. As we move to simpler and simpler forms of life, we find simpler and simpler forms of experience. Perhaps, at some point, the light switches off, and consciousness disappears. But it’s at least coherent to suppose that this continuum of consciousness fading while never quite turning off carries on into inorganic matter, with fundamental particles having almost unimaginably simple forms of experience to reflect their incredibly simple nature. That’s what panpsychists believe. . . .

The starting point of the panpsychist is that physical science doesn’t actually tell us what matter is. . . . Physics tells us absolutely nothing about what philosophers like to call the intrinsic nature of matter: what matter is, in and of itself. So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter. There’s just matter, on this view, nothing supernatural or spiritual. But matter can be described from two perspectives. Physical science describes matter “from the outside,” in terms of its behavior. But matter “from the inside”—i.e., in terms of its intrinsic nature—is constituted of forms of consciousness.

Unfortunately, there is, at present, no proof that the universe is pervaded by mind, nor is there solid evidence that the laws of physics have evolved. We do know that the science of physics is no longer as deterministic as it used to be. The behavior of subatomic particles is not fully predictable, despite the best efforts of physicists for nearly a century, and many physicists now acknowledge this. We also know that the concepts of laws and determinism often fail in the field of biology — there are very few actual laws in biology, and the idea that these laws preexisted life itself seems incoherent. No biologist will tell you that human beings in their present state are the inevitable product of deterministic evolution and that if we started the planet Earth all over again, we would end up in 4.5 billion years with exactly the same types of life forms, including humans, that we have now. Nor can biologists predict the movement of life forms the same way that physicists can predict the movement of planets. Life forms do their own thing. Human beings retain their free will and moral responsibility. Still, the notion that the laws of physics are pre-existent and eternal appears to have no solid ground either; it is merely one of those assumptions that has become widely accepted because few have sought to challenge it or even ask for evidence.

Beyond the “Mechanism” Metaphor in Biology

In a previous post, I discussed the frequent use of the “mechanism” metaphor in the sciences. I argued that while this metaphor was useful in spurring research into cause-and-effect patterns in physical and biological entities, it was inadequate as a descriptive model for what the universe and life are like. In particular, the “mechanism” metaphor is unable to capture the reality of change, the evidence of self-driven progress, and the autonomy and freedom of life forms.

I don’t think it’s possible to abandon metaphors altogether in science, including the mechanism metaphor. But I do think that if we are to more fully understand the nature of life, in all its forms, we must supplement the mechanism metaphor with other, additional conceptualizations and metaphors that illustrate dynamic processes.

______________________________

 

David Bohm (1917-1992), one of the most prominent physicists of the 20th century, once remarked upon a puzzling development in the sciences: While 19th century classical physics operated according to the view that the universe was a mechanism, research into quantum physics in the 20th century demonstrated that the behavior of particles at the subatomic level was not nearly as deterministic as the behavior of larger objects, but rather was probabilistic. Nevertheless, while physicists adjusted to this new reality, the science of biology was increasingly adopting the metaphor of mechanism to study life. Remarked Bohm:

It does seem odd . . . that just when physics is thus moving away from mechanism, biology and psychology are moving closer to it. If this trend continues, it may well be that scientists will be regarding living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism. But of course, in the long run, such a point of view cannot stand up to critical analysis. For since DNA and other molecules studied by the biologist are constituted of electrons, protons, neutrons, etc., it follows that they too are capable of behaving in a far more complex and subtle way than can be described in terms of mechanical concepts. (Source: David Bohm, “Some Remarks on the Notion of Order,” in Towards a Theoretical Biology, Vol. 2: Sketches, ed. C.H. Waddington, Chicago: Aldine Publishing, p. 34.)

According to Bohm, biology had to overcome, or at least supplement, the mechanism metaphor if it was to advance. It was not enough to state that anything outside mechanical processes was “random,” for the concept of randomness was too ill-defined to constitute an adequate description of phenomena that did not fit into the mechanism metaphor. For one thing, noted Bohm, the word “random” was often used to denote “disorder,” when in fact it was impossible for a phenomenon to have no order whatsoever. Nor did unpredictability imply randomness — Bohm pointed out that the notes of a musical composition are not predictable, but nonetheless have a precise order when considered in totality. (Ibid., p. 20)

Bohm’s alternative conceptualization was that of an open order, that is, an order that consists of multiple potential sub-orders or outcomes. For example, if you roll a single die once, there are six possible outcomes and each outcome is equally likely. But the die is not disordered; in fact, it is a precisely ordered system, with sides of equal length and weight evenly distributed throughout the cube. (This issue is discussed in How Random is Evolution?) However, unlike the roll of a die, life is both open to new possibilities and capable of retaining previous outcomes, resulting in increasingly complex orders, orders that are nonetheless still open to change.
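The point that unpredictability is not the same as disorder can be seen in a short simulation (a hedged illustration of my own, not Bohm’s example): no single roll of a fair die can be predicted, yet the totality of rolls displays a precise order.

```python
import random
from collections import Counter

def roll_die(n, seed=42):
    """Roll a fair die n times. Each individual roll is unpredictable,
    but the ensemble shows a precise order: near-uniform frequencies."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) for _ in range(n))

counts = roll_die(60_000)
for face in range(1, 7):
    # each face lands close to 1/6 of the total rolls
    print(face, counts[face])
```

The individual outcome is open; the distribution over many outcomes is a tight, ordered pattern.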

Although we are inclined to think of reality as composed of “things,” Bohm argued that the fundamental reality of the universe was not “things” but change: “All is process. That is to say, there is no thing in the universe. Things, objects, entities, are abstractions of what is relatively constant from a process of movement and transformation. They are like the shapes that children like to see in the clouds . . . .” (“Further Remarks on Order,” Ibid., p. 42) The British biologist C.H. Waddington, commenting on Bohm, proposed another metaphor, borrowed from the ancient Judeo-Christian sectarian movement known as Gnosticism:

‘Things’ are essentially eggs — pregnant with God-knows-what. You look at them and they appear simple enough, with a bland definite shape, rather impenetrable. You glance away for a bit and when you look back what you find is that they have turned into a fluffy yellow chick, actively running about and all set to get imprinted on you if you will give it half a chance. Unsettling, even perhaps a bit sinister. But one strand of Gnostic thought asserted that _everything_ is like that. (C.H. Waddington, “The Practical Consequences of Metaphysical Beliefs on a Biologist’s Work,” Ibid., p. 73)

Bohm adds that although the mechanism metaphor is apt to make one think of nature as an engineer or the work of an engineer (i.e., the universe as a “clock”), it could be more useful to think of nature as an artist. Bohm compares nature to a young child beginning to draw. Such a child attempting to draw a rectangle for the first time is apt to end up with a drawing that resembles random or nearly-random lines. Over time, however, the child gathers visual impressions and instructions from parents, teachers, books, and toys about what shapes are and what a rectangle is; with growth and practice, the child learns to draw a reasonably good rectangle. (Bohm, “Further Remarks on Order,” Ibid., pp. 48-50) It is an order that appears to be the outcome of randomness, but in fact emerges from an open order of multiple possibilities.

 

The American microbiologist Carl R. Woese (1928-2012), who won honors and awards for his discovery of a third domain of life, the “archaea,” also rejected the use of mechanist perspectives in biology. In an article calling for a “new biology,” Woese argued that biology borrowed too much from physics, focusing on the smallest parts of nature while lacking a holistic perspective:

Let’s stop looking at the organism purely as a molecular machine. The machine metaphor certainly provides insights, but these come at the price of overlooking much of what biology is. Machines are not made of parts that continually turn over, renew. The organism is. Machines are stable and accurate because they are designed and built to be so. The stability of an organism lies in resilience, the homeostatic capacity to reestablish itself. While a machine is a mere collection of parts, some sort of “sense of the whole” inheres in the organism, a quality that becomes particularly apparent in phenomena such as regeneration in amphibians and certain invertebrates and in the homeorhesis exhibited by developing embryos.

If they are not machines, then what are organisms? A metaphor far more to my liking is this. Imagine a child playing in a woodland stream, poking a stick into an eddy in the flowing current, thereby disrupting it. But the eddy quickly reforms. The child disperses it again. Again it reforms, and the fascinating game goes on. There you have it! Organisms are resilient patterns in a turbulent flow—patterns in an energy flow. A simple flow metaphor, of course, fails to capture much of what the organism is. None of our representations of organism capture it in its entirety. But the flow metaphor does begin to show us the organism’s (and biology’s) essence. And it is becoming increasingly clear that to understand living systems in any deep sense, we must come to see them not materialistically, as machines, but as (stable) complex, dynamic organization. (“A New Biology for a New Century,” Microbiology and Molecular Biology Reviews, June 2004, pp. 175-6)

A swirling pattern of water is perhaps not entirely satisfactory as a metaphoric conceptualization of life, but it does point to an aspect of reality that the mechanism metaphor does not satisfactorily capture: the ability of life to adapt.

Woese proposes another metaphor to describe what life was like in the very early stages of evolution, when primitive single-celled organisms were all that existed: a community. In this stage, cellular organization was minimal, and many important functions evolved separately and imperfectly in different cellular organisms. However, these organisms could evolve by exchanging genes, in a process called horizontal gene transfer (HGT). This exchange, not random mutation, was the primary factor in very early evolution. According to Woese:

The world of primitive cells feels like a vast sea, or field, of cosmopolitan genes flowing into and out of the evolving cellular (and other) entities. Because of the high level of HGT [horizontal gene transfer], evolution at this stage would in essence be communal, not individual. The community of primitive evolving biological entities as a whole as well as the surrounding field of cosmopolitan genes participates in a collective reticulate [i.e., networked] evolution. (Ibid., p. 182)

It was only later that this loose community of cells increased its interactions to the point at which a phase transition took place, in which evolution became less communal and the vertical inheritance of relatively well-developed organisms became the main form of evolutionary descent. But horizontal gene transfer continued after this transition, and continues to this day. (Ibid., pp. 182-84) It’s hard to see how these interactions resemble any kind of mechanism.

Tree of life showing vertical and horizontal gene transfers.

Source: Horizontal gene transfer – Wikipedia

 

_____________________________

So let’s return to the question of “vitalism,” the old theory that there was something special responsible for life: a soul, spirit, force, or substance. The old theories of vitalism have been abandoned on the grounds that no one has been able to observe, identify, or measure a soul, spirit, etc. However, the dissatisfaction of many biologists with the “mechanist” outlook has led to a new conception of vitalism, one in which the essence of life lies not in a mysterious substance or force but in the organization of matter and energy, and in the processes that occur under this organization. (See Sebastian Normandin and Charles T. Wolfe, eds., Vitalism and the Scientific Image in Post-Enlightenment Life Science, 1800-2010, pp. 2n4, 69, 277, 294)

As Woese wrote, organisms are “resilient patterns . . . in an energy flow.” In a previous essay, I pointed to the work of the great physicist Werner Heisenberg, who noted that matter and energy are essentially interchangeable and that the universe itself began as a great burst of energy, much of which gradually evolved into different forms of matter over time. According to Heisenberg, “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .” (Physics and Philosophy, p. 63)

Now energy itself is not a personal being, and while energy can move things, it’s problematic to equate all moving matter with life. But is it not the case that once a particular configuration of energy/matter rises to a certain level, organized under a unified consciousness with a free will, that configuration of energy/matter constitutes a spirit or soul? In this view, there is no vitalist “substance” that gives life to matter — it is simply a matter of energy/matter reaching a certain level of organization capable of (at least minimal) consciousness and free will.

In this view, when ancient peoples thought that breath was the spirit of life and blood was the sacred source of life, they were not that far off the mark. Oxygen is needed by (most) life forms to process the energy in food. Without the continual flow of oxygen from our environment into our body, we die. (Indeed, brain damage will occur after only three minutes without oxygen.) And blood delivers the oxygen and nutrients to the cells that compose our body. Both breath and blood maintain the flow of energy that is essential to life. It’s all a matter of organized energy/matter, with billions of smaller actors and activities working together to form a unified conscious being.

The Metaphor of “Mechanism” in Science

The writings of science make frequent use of the metaphor of “mechanism.” The universe is conceived as a mechanism, life is a mechanism, and even human consciousness has been described as a type of mechanism. If a phenomenon is not an outcome of a mechanism, then it is random. Nearly everything science says about the universe and life falls into the two categories of mechanism and random chance.

The use of the mechanism metaphor is something most of us hardly ever notice. Science, allegedly, is all about literal truth and precise descriptions. Metaphors are for poetry and literature. But in fact mathematics and science rely on metaphors. Our understandings of quantity, space, and time are based on metaphors derived from our bodily experiences, as George Lakoff and Rafael Nunez have pointed out in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Theodore L. Brown, a professor emeritus of chemistry at the University of Illinois at Urbana-Champaign, has provided numerous examples of scientific metaphors in his book, Making Truth: Metaphor in Science. Among these are the “billiard ball” and “plum pudding” models of the atom, as well as the “energy landscape” of protein folding. Scientists envision cells as “factories” that accept inputs and produce goods. The genetic structure of DNA is described as having a “code” or “language.” The term “chaperone proteins” was invented to describe proteins that have the job of assisting other proteins to fold correctly.

What I wish to do in this essay is closely examine the use of the mechanism metaphor in science. I will argue that this metaphor has been extremely useful in advancing our knowledge of the natural world, but its overuse as a descriptive and predictive model has led us down the wrong path to fully understanding reality — in particular, understanding the actual nature of life.

____________________________

Thousands of years ago, human beings attributed the actions of natural phenomena to spirits or gods. A particular river or spring or even tree could have its own spirit or minor god. Many humans also believed that they themselves possessed a spirit or soul which occupied the body, gave the body life and motion and intelligence, and then departed when the body died. According to the Bible, Genesis 2:7, when God created Adam from the dust of the ground, God “breathed into his nostrils the breath of life; and man became a living soul.” Knowing very little of biology and human anatomy, early humans were inclined to think that spirit/breath gave life to material bodies; and when human bodies no longer breathed, they were dead, so presumably the “spirit” went someplace else. The ancient Hebrews also saw a role for blood in giving life, which is why they regarded blood as sacred. Thus, the Hebrews placed many restrictions on the consumption and handling of blood when they slaughtered animals for sacrifice and food. These views about the spiritual aspects of breath and blood are also the historical basis of “vitalism,” the theory that life consists of more than material parts, and must somehow be based on a vital principle, spark, or force, in addition to matter. 

The problem with the vitalist outlook is that it did not appreciably advance our knowledge of nature and the human body. The idea of a vital principle or force was too vague to be tested, measured, or even observed. Of course, humans did not have microscopes thousands of years ago, so we could not see cells and bacteria, much less atoms.

By the 17th century, thinkers such as Thomas Hobbes and Rene Descartes proposed that the universe and even life forms were types of mechanisms, consisting of many parts that interacted in such a way as to result in predictable patterns. The universe was often analogized to a clock. (The first mechanical clock was developed around 1300 A.D., but water clocks, based on the regulated flow of water, have been in use for thousands of years.) The great French scientist Pierre-Simon Laplace was an enthusiast for the mechanist viewpoint and even argued that the universe could be regarded as completely determined from its beginnings:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes. (A Philosophical Essay on Probabilities, Chapter Two)
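Laplace’s picture can be mimicked in a few lines of code. The following is a deliberately simple sketch of deterministic evolution (my own illustration, not a model of any real system): given exact initial conditions and a fixed law, every future state follows necessarily.

```python
def simulate_fall(x0, v0, dt=0.001, steps=2000, g=-9.81):
    """Step a falling body forward under a fixed law (constant gravity).
    The entire future is determined by the initial conditions alone."""
    x, v = x0, v0
    for _ in range(steps):
        v += g * dt   # the 'law': constant acceleration
        x += v * dt   # position follows necessarily
    return x, v

# Identical initial conditions always yield identical futures:
run1 = simulate_fall(100.0, 0.0)
run2 = simulate_fall(100.0, 0.0)
print(run1 == run2)  # True: no room for novelty in a purely mechanical law
```

Laplace’s demon is, in effect, this loop scaled up to every particle in the universe, with perfect knowledge of the starting state.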

Laplace’s radical determinism was not embraced by all scientists, but it was a common view. Later, as the science of biology developed, it was argued that the evolution of life was not as determined as the motion of the planets. Rather, random genetic mutations resulted in new life forms, and “natural selection” determined that fit life forms flourished and reproduced, while unfit forms died out. In this view, physical mechanisms combined with random chance explained evolution.

The astounding advances in physics and biology in the past centuries certainly seem to justify the mechanism metaphor. Reality does seem to consist of various parts that interact in predictable cause-and-effect patterns. We can predict the motions of objects in space, and build technologies that send objects in the right direction and speed to the right target. We can also methodically trace illnesses to a dysfunction in one or more parts of the body, and this dysfunction can often be treated by medicine or surgery.

But have we been overusing the mechanism metaphor? Does reality consist of nothing but determined and predictable cause-and-effect patterns with an element of random chance mixed in?

I believe that we can shed some light on this subject by first examining what mechanisms are — literally — and then examining what resemblances and differences there are between mechanisms and the actual universe, between mechanisms and actual life.

____________________

 

Even in ancient times, human beings created mechanisms, from clocks to catapults to cranes to odometers. The Antikythera mechanism of ancient Greece, constructed around 100 B.C., was a sophisticated mechanism with over 30 gears that was able to predict astronomical motions and is considered to be one of the earliest computers. Below is a photo of a fragment of the mechanism, discovered in an ocean shipwreck in 1901:

 

Over subsequent centuries, human civilization created steam engines, propeller-driven ships, automobiles, airplanes, digital watches, computers, robots, nuclear reactors, and spaceships.

So what do most or all of these mechanisms have in common?

  1. Regularity and Predictability. Mechanisms have to be reliable. They have to do exactly what you want every time. Clocks can’t run fast, then run slow; automobiles can’t unilaterally change direction or speed; nuclear reactors can’t overheat on a whim; computers have to give the right answer every time. 
  2. Precision. The parts that make up a mechanism must fit together and move together in precise ways, or breakdown, even disaster, will result. Engineering tolerances are often measured in fractions of a millimeter.
  3. Stability and Durability. Mechanisms are often made of metal, and for good reason. Metal can endure extreme forces and temperatures, and, if properly maintained, can last for many decades. Metal can slightly expand and contract depending on temperature, and metals can have some flexibility when needed, but metallic constructions are mostly stable in shape and size. 
  4. Unfree/Determined. Mechanisms are built by humans for human purposes. When you manage the controls of a mechanism correctly, the results are predictable. If you get into your car and decide to drive north, you will drive north. The car will not dispute you or override your commands, unless it is programmed to override your commands, in which case it is simply following a different set of instructions. The car has no will of its own. Human beings would not build mechanisms if such mechanisms acted according to their own wills. The idea of a self-willing mechanism is common in science fiction, but not in science.
  5. They do not grow. Mechanisms do not become larger over time or change their basic structure like living organisms. This would be contrary to the principle of durability/stability. Mechanisms are made for a purpose, and if there is a new purpose, a new mechanism will be made.
  6. They do not reproduce. Mechanisms do not have the power of reproduction. If you put a mechanism into a resource-rich environment, it will not consume energy and materials and give birth to new mechanisms. Only life has this power. (A partial exception can be made in the case of  computer “viruses,” which are lines of code programmed to duplicate themselves, but the “viruses” are not autonomous — they do the bidding of the programmer.)
  7. Random events lead to the universal degradation of mechanisms, not improvement. According to neo-Darwinism, random mutations in the genes of organisms are what is responsible for evolution; in most cases, mutations are harmful, but in some cases, they lead to improvement, leading to new and more complex organisms, ultimately culminating in human beings. So what kind of random mutations (changes) lead to improved mechanisms? None, really. Mechanisms change over time with random events, but these events lead to degradation of mechanisms, not improvement. Rust sets in, different parts break, electric connections fail, lubricating fluids leak. If you leave a set of carefully-preserved World War One biplanes out in a field, without human intervention, they will not eventually evolve into jet planes and rocket ships. They will just break down. Likewise, electric toasters will not evolve into supercomputers, no matter how many millions of years you wait. Of course, organisms also degrade and die, but they have the power of reproduction, which continues the population and creates opportunities for improvement.
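The asymmetry in point 7, that random change alone degrades while reproduction plus selection can improve, can be put into a toy simulation. This is my own illustrative model with made-up numbers, not a biological claim:

```python
import random

def degrade(fitness, steps, rng):
    """A lone mechanism exposed to random events: rust, broken parts,
    leaks. With no reproduction, changes only accumulate as damage."""
    for _ in range(steps):
        fitness -= rng.random() * 0.1
    return fitness

def evolve(pop_size, steps, rng):
    """A reproducing population: mutations are harmful on average
    (mean -0.02), but selection keeps the rare improvements."""
    population = [1.0] * pop_size
    for _ in range(steps):
        offspring = [f + rng.gauss(-0.02, 0.1) for f in population]
        population = sorted(population + offspring, reverse=True)[:pop_size]
    return max(population)

rng = random.Random(0)
worn = degrade(1.0, 200, rng)      # drifts far below its starting fitness
improved = evolve(20, 200, rng)    # climbs despite harmful average mutation
print(worn, improved)
```

The biplane in the field is the first function; the reproducing cell population is the second. The only structural difference between them is reproduction with selection, yet one can only decay while the other can accumulate improvements.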

There is one hypothetical mechanism that, if constructed, could mimic actual organisms: a self-replicating machine. Such a machine could conceivably contain plans within itself to gather materials and energy from its environment and use these materials and energy to construct copies of itself, growing exponentially in numbers as more and more machines reproduce themselves. Such machines could even be programmed to “mutate,” creating variations in its descendants. However, no such mechanism has yet been produced. Meanwhile, primitive single-celled life forms on earth have been successfully reproducing for four billion years.

Now, let’s compare mechanisms to life forms. What are the characteristics of life?

  1. Adaptability/Flexibility. The story of life on earth is a story of adaptability and flexibility. The earliest life forms, single cells, apparently arose in hydrothermal vents deep in the ocean. Later, some of these early forms evolved into multi-cellular creatures, which spread throughout the oceans. After 3.5 billion years, fish emerged, and then much later, the first land creatures. Over time, life adapted to different environments: sea, land, rivers, caves, air; and also to different climates, from the steamiest jungles to frozen environments. 
  2. Creativity/Diversification. Life is not only adaptive, it is highly creative and branches into the most diverse forms over time. Today, there are millions of species. Even in the deepest parts of the ocean, organisms thrive under pressures that would crush most other life forms. There are bacteria that can live in water at or near the boiling point. The tardigrade can survive the cold, hostile vacuum of space. The bacterium Deinococcus radiodurans is able to survive extreme doses of radiation by means of one of the most efficient DNA repair capabilities ever seen. Now it’s true that among actual mechanisms there is also a great variety; but these mechanisms are not self-created; they are created by humans and retain their forms unless specifically modified by humans.
  3. Drives toward cooperation / symbiosis. Traditional Darwinist views of evolution see life as competition and “survival of the fittest.” However, more recent theorists of evolution point to the strong role of cooperation in the emergence and survival of advanced life forms. Biologist Lynn Margulis argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, creating a more advanced life form. This theory was regarded with much skepticism when it was proposed, but over time it became widely accepted. Today, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes! By contrast, the parts of a mechanism don’t naturally come together to form the mechanism; they are forced together by their manufacturer.
  4. Growth. Life is characterized by growth. All life forms begin with either a single cell, or the merger of two cells, after which a process of repeated division begins. In multicellular organisms, the initial cell eventually becomes an embryo; and when that embryo is born, becoming an independent life form, it continues to grow. In some species, that life form develops into an animal that can weigh hundreds or even thousands of pounds. This, from a microscopic cell! No existing mechanism is capable of that kind of growth.
  5. Reproduction. Mechanisms eventually disintegrate, and life forms die. But life forms have the capability of reproducing and making copies of themselves, carrying on the line. In an environment with adequate natural resources, the number of life forms can grow exponentially. Mechanisms have not mastered that trick.
  6. Free will/choice. Mechanisms are either under direct human control, are programmed to do certain things, or perform in a regular pattern, such as a clock. Life forms, in their natural settings, are free and have their own purposes. There are some regular patterns — sleep cycles, mating seasons, winter migration. But the day-to-day movements and activities of life forms are largely unpredictable. They make spur-of-the-moment decisions on where to search for food, where to find shelter, whether to fight or flee from predators, and which mate is most acceptable. In fact, the issue of mate choice is one of the most intriguing illustrations of free will in life forms — there is evidence that species may select mates for beauty over actual fitness, and human egg cells even play a role in selecting which sperm cells will be allowed to penetrate them.
  7. Able to gather energy from its environment. Mechanisms require energy to work, and they acquire such energy from wound springs or weights (in clocks), electrical outlets, batteries, or fuel. These sources of energy are provided by humans in one way or another. But life forms are forced to acquire energy on their own, and even the most primitive life forms mastered this feat billions of years ago. Plants get their energy from the sun, and animals get their energy from plants or other animals. It’s true that some mechanisms, such as space probes, can operate on their own for many years while drawing energy from solar panels. But these panels were invented and produced by humans, not by mechanisms.
  8. Self-organizing. Mechanisms are built, but life forms are self-organizing. Small components join other small components, forming a larger organization; this larger organization gathers together more components. There is a gradual growth and differentiation of functions — digestion, breathing, brain and nervous system, mobility, immune function. Now this process is very, very slow: evolution takes place over hundreds of millions of years. But mechanisms are not capable of self-organization. 
  9. Capacity for healing and self-repair. When mechanisms are broken, or not working at full potential, a human being intervenes to fix them. When organisms are injured or infected, they can self-repair by initiating multiple processes, either simultaneously or in stages: immune cells fight invaders; blood clots form in open wounds to stop bleeding; dead tissues and cells are removed by other cells; and growth hormones are released to begin the process of building new tissue. As healing nears completion, cells originally sent to repair the wound are removed or modified. Now self-repair is not always adequate, and organisms die all the time from injury or infection. But they would die much sooner, and a species probably would not persist at all, without the means of self-repair. Even the medications and surgery that modern science has developed largely work with and supplement the body’s healing capacities — after all, surgery would be unlikely to work in most cases without the body’s means of self-repair after the surgeon completes cutting and sewing.

______________________


The mechanism metaphor served a very useful purpose in the history of science, by spurring humanity to uncover the cause-and-effect patterns responsible for the motions of stars and planets and the biological functions of life. We can now send spacecraft to planets; we can create new chemicals to improve our lives; we now know that illness is the result of a breakdown in the relationship between the parts of a living organism; and we are getting better and better at figuring out which human parts need medication or repair, so that lifespans and general health can be extended.

But if we are seeking the broadest possible understanding of what life is, and not just the biological functions of life, we must abandon the mechanism metaphor as inadequate and even deceptive. I believe the mechanism metaphor misses several major characteristics of life:

  1. Change. Whether it is growth, reproduction, adaptation, diversification, or self-repair, life is characterized by change, by plasticity, flexibility, and malleability. 
  2. Self-Driven Progress. There is clearly an overall improvement in life forms over time. Changes in species may take place over millions or billions of years, but even so, the differences between a single-celled animal and contemporary multicellular creatures are astonishingly large. It is not just a question of “complexity,” but of capability. Mammals, reptiles, and birds have senses, mobility, and intelligence that single-celled creatures do not have.
  3. Autonomy and freedom. Although some scientists are inclined to think of living creatures, including humans, as “gene machines,” life forms can’t be easily analogized to pre-programmed machines. Certainly, life forms have goals that they pursue — but the pursuit of these goals in an often hostile environment requires numerous spur-of-the-moment decisions that do not lead to the predictable outcomes we expect of mechanisms.

Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, argues in Lila that the fundamental nature of life is its ability to move away from mechanistic patterns, and science has overlooked this fact because scientists consider it their job to look for mechanisms:

Mechanisms are the enemy of life. The more static and unyielding the mechanisms are, the more life works to evade them or overcome them. The law of gravity, for example, is perhaps the most ruthlessly static pattern of order in the universe. So, correspondingly, there is no single living thing that does not thumb its nose at that law day in and day out. One could almost define life as the organized disobedience of the law of gravity. One could show that the degree to which an organism disobeys this law is a measure of its degree of evolution. Thus, while the simple protozoa just barely get around on their cilia, earthworms manage to control their distance and direction, birds fly into the sky, and man goes all the way to the moon. . . .  This would explain why patterns of life [in evolution] do not change solely in accord with causative ‘mechanisms’ or ‘programs’ or blind operations of physical laws. They do not just change valuelessly. They change in ways that evade, override and circumvent these laws. The patterns of life are constantly evolving in response to something ‘better’ than that which these laws have to offer. (Lila, 1991 hardcover edition, p. 143)

But if the “mechanism” metaphor is inadequate, what are some alternative conceptualizations and metaphors that can retain the previous advances of science while deepening our understanding and helping us make new discoveries? I will discuss this issue in the next post.

Next: Beyond the “Mechanism” Metaphor in Biology


Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:


1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”)

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (Percept and Concept – The Abuse of Concepts)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.


2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above normal. They did not lose their intellectual ability; they lost their emotions. And that made all the difference. They lost their ability to make good decisions, to manage their time effectively, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure persons’ emotional intelligence.


3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. If we invent a new word that becomes widely adopted, if we come up with an idea that is both completely original and worthy, that is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, by the most intelligent people in a culture, has overcome many backward and barbarous practices and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and those contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture they live in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth-century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)


4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head, you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to be fully successful in interpreting medical scans simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience but can’t be fully learned in a text. Many times when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
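The strengthening-and-weakening scheme just described can be sketched in a few lines of code. The following toy example (a classic single-neuron “perceptron,” with all names invented for illustration; real medical-imaging networks are vastly larger) learns the logical OR pattern purely from labeled examples, with no explicit rules:

```python
# A toy version of the learning scheme described above: a single artificial
# "neuron" whose connection weights are strengthened or weakened according
# to whether its answer was right or wrong. Illustrative only; not an
# actual medical-imaging system.

def train_perceptron(examples, epochs=20, rate=0.1):
    """examples: list of (inputs, correct_answer) pairs, answers 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, answer in examples:
            guess = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = answer - guess  # 0 if right; +1 or -1 if wrong
            # Strengthen or weaken each connection in proportion to its input:
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Teach the neuron the logical OR pattern from labeled examples alone:
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

After training, the learned weights classify all four input patterns correctly, yet nowhere in the program is a rule such as “output 1 if either input is 1” written down; the “rule” exists only implicitly in the numerical weights, which is why such systems remain black boxes.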


5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by particular persons at some point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.


6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics and chemistry and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________


This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is an example that is not from Aristotle himself but is commonly used to illustrate his logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)
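This kind of deduction is mechanical enough that modern proof assistants can check it. Here is a sketch in the Lean theorem prover (all names are illustrative):

```lean
-- The classic syllogism, formalized as a sketch in Lean 4.
variable (Man : Type)            -- the class of men
variable (Mortal : Man → Prop)   -- the property of being mortal
variable (socrates : Man)        -- premise: Socrates is a man

-- Premise: all men are mortal; conclusion: Socrates is mortal.
theorem socrates_mortal (all_mortal : ∀ m : Man, Mortal m) :
    Mortal socrates :=
  all_mortal socrates  -- apply the universal premise to the particular case
```

The entire proof is a single application of the universal premise to the particular case, which is exactly the step the syllogism licenses.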

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

Is Truth a Type of Good?

“[T]ruth is one species of good, and not, as is usually supposed, a category distinct from good, and co-ordinate with it. The true is the name of whatever proves itself to be good in the way of belief. . . .” – William James, “What Pragmatism Means”

“Truth is a static intellectual pattern within a larger entity called Quality.” – Robert Pirsig, Lila


Does it make sense to think of truth as a type of good? The initial reaction of most people to this claim is negative, sometimes strongly so. Surely what we like and what is true are two different things. The reigning conception of truth is known as the “correspondence theory of truth,” which argues simply that in order for a statement to be true it must correspond to reality. In this view, the words or concepts or claims we state must match real things or events, and match them exactly, whether those things are good or not.

The American philosopher William James (1842-1910) acknowledged that our ideas must agree with reality in order to be true. But where he parted company with most of the rest of the world was in what it meant for an idea to “agree.” In most cases, he argued, ideas cannot directly copy reality. According to James, “of many realities our ideas can only be symbols and not copies. . . . Any idea that helps us to deal, whether practically or intellectually, with either the reality or its belongings, that doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting, will agree sufficiently to meet the requirement.” He also argued that “True ideas are those we can assimilate, validate, corroborate, and verify.” (“Pragmatism’s Conception of Truth“) Many years later, Robert Pirsig argued in Zen and the Art of Motorcycle Maintenance and Lila that the truths of human knowledge, including science, were developed out of an intuitive sense of good or “quality.”

But what does this mean in practice? Many truths are unpleasant, and reality often does not match our desires. Surely truth should correspond to reality, not what is good.

One way of understanding what James and Pirsig meant is to examine the origins and development of language and mathematics. We use written language and mathematics as tools to make statements about reality, but the tools themselves do not merely “copy” or even strictly correspond to reality. In fact, these tools should be understood as symbolic systems for communication and understanding. In the earliest stages of human civilization, these symbolic systems did try to copy or correspond to reality; but the strict limitation of “corresponding” to reality was in fact a hindrance to the truth, and new, more creative symbols were required to allow knowledge to advance.


_______________________________


The first written languages consisted of pictograms, that is, drawn depictions of actual things — human beings, stars, cats, fish, houses. Pictograms had one big advantage: by clearly depicting the actual appearance of things, everyone could quickly understand them. They were the closest thing to a universal language; anyone from any culture could understand pictograms with little instruction.

However, there were some pretty big disadvantages to the use of pictograms as a written language. Many of the things we all see in everyday life can be clearly communicated through drawings. But there are a lot of ideas, actions, abstract concepts, and details that are not so easily communicated through drawings. How does one depict activities such as running, hunting, fighting, and falling in love, while making it clear that one is communicating an activity and not just a person? How does one depict a tribe, kingdom, battle, or forest, without becoming bogged down in drawing pictograms of all the persons and objects involved? How does one depict attributes and distinguish between specific types of people and specific types of objects? How does one depict feelings, emotions, ideas, and categories? Go through a dictionary at random sometime and see how many words can be depicted in a clear pictogram. There are not many. There is also the problem of differences in artistic ability and the necessity of maintaining standards. Everyone may have a different idea of what a bird looks like and different abilities in drawing a bird.

These limitations led to an interesting development in written language: over hundreds or thousands of years, pictograms became increasingly abstract, to the point at which their form did not copy or correspond to what they represented at all. This development took place across civilizations, as seen in this graphic, in which the top pictograms represent the earliest forms and the bottom ones the later forms:

(Source: Wikipedia, https://en.wikipedia.org/wiki/History_of_writing)

Eventually, pictograms were abandoned by most civilizations altogether in favor of alphabets. By using combinations of letters to represent objects and ideas, it became easier for people to learn how to read and write. Instead of having to memorize tens of thousands of pictograms, people simply needed to learn new combinations of letters/sounds. No artistic ability was required.

One could argue that this development in writing systems does not address the central point of the correspondence theory of truth, that a true statement must correspond to reality. In this theory, it is perfectly OK for an abstract symbol to represent something. If someone writes “I caught a fish,” it does not matter if the person draws a fish or uses abstract symbols for a fish, as long as this person, in reality, actually did catch a fish. From the pragmatic point of view, however, the evolution of human symbolic systems toward abstraction is a good illustration of pragmatism’s main point: by making our symbolic systems better, human civilizations were able to communicate more, understand more, educate more, and acquire more knowledge. Pictograms fell short in helping us “deal with reality,” and that’s why written language had to advance above and beyond pictograms.


Let us turn to mathematics. The earliest humans were aware of quantities, but tended to depict quantities in a direct and literal manner. For small quantities, such as two, the ancient Egyptians would simply draw two pictograms of the object. Nothing could correspond to reality better than that. However, for larger quantities, it was hard, tedious work to draw the same pictogram over and over. So early humans used tally marks or hash marks to indicate quantities, with “four” represented as four distinct marks: | | | | and then perhaps a symbol or pictogram of the object. Again, these earliest depictions of numbers were so simple and direct, the correspondence to reality so obvious, that they were easily understood by people from many different cultures.

In retrospect, tally marks appear to be very primitive and hardly a basis for a mathematical system. However, I argue that tally marks were actually a revolutionary advance in how human beings understood quantities — because for the first time, quantity became an abstraction disconnected from particular objects. One did not have to make distinctions between three cats, three kings, or three bushels of grain; the quantity “three” could be understood on its own, without reference to what it was representing. Rather than drawing three cats, three kings, or three bushels of grain, one could use | | |  to represent any group of three objects.

The problem with tally marks, of course, was that this system could not easily handle large quantities or permit complex calculations. So, numerals were invented. The ancient Egyptian numeral system used tally marks for numbers below ten, but then used other symbols for larger quantities: ten, hundred, thousand, and so forth.

The ancient Roman numeral system also evolved out of tally marks, with | | | or III representing “three,” but with different symbols for five (V), ten (X), fifty (L), hundred (C), five hundred (D), and thousand (M). Numbers were depicted by writing the largest numerical symbols on the left and the smallest to the right, adding the symbols together to get the quantity (example: 1350 = MCCCL); a smaller numerical symbol to the left of a larger numerical symbol required subtraction (example: IX = 9). As with the Egyptian system, Roman numerals were able to cope with large numbers, but rather than the more literal depiction offered by tally marks, the symbols were a more creative interpretation of quantity, with implicit calculations required for proper interpretation of the number.
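The additive and subtractive rules just described are mechanical enough to capture in a few lines of code. Here is a minimal Python sketch (the function name and structure are my own illustration, not any standard library):

```python
# Values of the Roman symbols described above.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Interpret a Roman numeral: add symbols left to right,
    subtracting any symbol that precedes a larger one."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value  # smaller symbol before a larger one: subtract
        else:
            total += value  # otherwise: add
    return total

print(roman_to_int("MCCCL"))  # 1350
print(roman_to_int("IX"))     # 9
```

Note how the reader (or the code) must perform implicit calculations just to recover the quantity, exactly the kind of interpretation the paragraph above describes.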

The use of numerals by ancient civilizations represented a further increase in the abstraction of quantities. With numerals, one could make calculations of almost any quantity of any objects, even imaginary objects or no objects. Teachers instructed children how to use numerals and how to make calculations, usually without any reference to real-world objects. A minority of intellectuals studied numbers and calculations for many years, developing general theorems about the relationships between quantities. And before long, the power and benefits of mathematics were such that mathematicians became convinced that mathematics itself, and not the actual objects we once attached to numbers, was the ultimate reality of the universe. (On the theory of “mathematical Platonism,” see this post.)

For well over a thousand years, Roman numerals continued to be used. Rome was able to build and administer a great empire while using these numerals for accounting, commerce, and engineering. In fact, the Romans were famous for their accomplishments in engineering. It was not until the 14th century that Europe began to discover the virtues of the Hindu-Arabic numeral system. And although it took centuries more, today the Hindu-Arabic system is the most widely used system of numerals in the world.

Why is this?

The Hindu-Arabic system is noted for two major accomplishments: its positional decimal system and the number zero. The “positional decimal system” simply refers to a base 10 system in which the value of a digit is based upon its position. A single numeral may be multiplied by ten or one hundred or one thousand, depending on its position in the number. For example, the number 832 is: 8×100 + 3×10 + 2. We generally don’t notice this, because we spent years in school learning this system, and it comes to us automatically that the first digit “8” in 832 means 8 × 100. Roman numerals never worked this way. The Romans grouped quantities in symbols representing ones, fives, tens, fifties, hundreds, and so on, and added the symbols together. So the Roman version of 832 is DCCCXXXII (500 + 100 + 100 + 100 + 10 + 10 + 10 + 1 + 1).
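The positional principle lends itself to a tiny illustration in code. The sketch below (the function name is mine) expands a number into the digit-times-place-value pairs described above:

```python
def positional_expansion(n: int) -> list[tuple[int, int]]:
    """Return (digit, place value) pairs for a positive integer,
    e.g. 832 -> [(8, 100), (3, 10), (2, 1)]."""
    digits = str(n)
    return [(int(d), 10 ** (len(digits) - 1 - i)) for i, d in enumerate(digits)]

expansion = positional_expansion(832)
print(expansion)                         # [(8, 100), (3, 10), (2, 1)]
print(sum(d * p for d, p in expansion))  # 832
```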

Because the Roman numeral system is additive, adding Roman numbers is easy — you just combine all the symbols. But multiplication is harder, and division is even harder, because it’s not so easy to take apart the different symbols. In fact, for many calculations, the Romans used an abacus, rather than trying to write everything down. The Hindu-Arabic system makes multiplication and division easy, because every digit, depending on its placement, is a multiple of 1, 10, 100, 1000, etc.
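The ease of multiplication in a positional system can be sketched directly: grade-school long multiplication is nothing more than digit-by-digit products, shifted by powers of ten and summed, a procedure with no natural equivalent in Roman symbols. A toy Python version (my own illustration):

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply two positive integers the grade-school way:
    multiply digit by digit, weighting each product by its place values."""
    total = 0
    for i, da in enumerate(reversed(str(a))):      # digit of a at place 10**i
        for j, db in enumerate(reversed(str(b))):  # digit of b at place 10**j
            total += int(da) * int(db) * 10 ** (i + j)
    return total

print(long_multiply(832, 47))  # 39104
```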

The invention of the positional decimal system took thousands of years, not because ancient humans were stupid, but because symbolizing quantities and their relationships in a way that is useful is actually hard work and requires creative interpretation. You just don’t look at nature and say, “Ah, there’s the number 12, from the positional decimal system!”

In fact, even many of the simplest numbers took thousands of years to become accepted. The number zero was not introduced to Europe until the 11th century, and it took several more centuries for zero to become widely used. Negative numbers did not appear in the West until the 15th century, and even then, they were controversial among the best mathematicians until the 18th century.

The shortcomings of seeing mathematical truths as a simple literal copying of reality become even clearer when one examines the origins and development of weights and measures. Here too, early human beings started out by picking out real objects as standards of measurement, only to find them unsuitable in the long run. One of the most well-known units of measurement in ancient times was the cubit, defined as the length of a man’s forearm from elbow to the tip of the middle finger. The foot was defined as the length of a man’s foot. The inch was the width of a man’s thumb. A basic unit of weight was the grain, that is, a single grain of barley or wheat. All of these measures corresponded to something real, but the problem, of course, was that there was a wide variation in people’s body parts, and grains could also vary in weight. What was needed was standardization; and it was not too long before governing authorities began to establish common standards. In many places throughout the world, authorities agreed that a single definition of each unit, based on a single object kept in storage, would be the standard throughout the land. The objects chosen were a matter of social convention, based upon convenience and usefulness. Nature or reality did not simply provide useful standards of measurement; there was too much variation even among the same types of objects provided by nature.


At this point, advocates of the correspondence theory of truth may argue, “Yes, human beings can use a variety of symbolic systems, and some are better than others. But the point is that symbolic systems should all represent the same reality. No matter what mathematical system you use, two plus two should still equal four.”

In response, I would argue that for very simple questions (2+2=4), the type of symbolic system you use will not make a big difference — you can use tally marks, Roman numerals, or Hindu-Arabic numerals. But the type of symbolic system you use will definitely make a difference in how many truths you can uncover, and particularly how many complicated truths you can grasp. Without good symbolic systems, many truths will remain forever hidden from us. As it was, the Roman numeral system was probably responsible for the lack of mathematical accomplishments of the Romans, even if their engineering was impressive for the time. And in any case, the pragmatic theory of truth already acknowledges that truth must agree with reality — it just cannot be a copy of reality. In the words of William James, an ideal symbolic system “helps us to deal, whether practically or intellectually, with either the reality or its belongings . . . doesn’t entangle our progress in frustrations, that fits, in fact, and adapts our life to the reality’s whole setting.” (“Pragmatism’s Conception of Truth”)

What is “Transcendence”?

You may have noticed a number of writings on religious topics that make reference to “transcendence” or “the transcendent.” However, the word “transcendence” is usually not very well defined, if it is defined at all. The Catechism of the Catholic Church makes several references to transcendence, but it’s not completely clear what transcendence means other than the infinite greatness of God, and the fact that God is “the inexpressible, the incomprehensible, the invisible, the ungraspable.” For those who value reason and precise arguments, this vagueness is unsatisfying. Astonishingly, the fifteen-volume Catholic Encyclopedia (1907-1914) did not even have an entry on “transcendence,” though it did have an entry on “transcendentalism,” a largely secular philosophy with a variety of schools and meanings. (The New Catholic Encyclopedia in 1967 finally did include an entry on “transcendence.”)

The Oxford English Dictionary defines “transcendence” as “the action or fact of transcending, surmounting, or rising above . . . ; excelling, surpassing; also the condition or quality of being transcendent, surpassing eminence or excellence. . . .” The reference to “excellence” is probably key to understanding what “transcendence” is. In my previous essay on ancient Greek religion, I pointed out that areté, the Greek word for “excellence,” was a central idea of Greek culture, and one cannot fully appreciate ancient Greek pagan religion without recognizing that this devotion to excellence lay at its heart. The Greeks depicted their gods as human, but with perfect physical forms. And while the behavior of the Greek gods was often dubious from a moral standpoint, the Greek gods were still regarded as the givers of wisdom, order, justice, love, and all the institutions of human civilization.

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Theologians refer to transcendence as one of the two natures of God, the other being “immanence.” Transcendence refers to the higher nature of God and immanence refers to God as He currently works in reality, i.e., the cosmic order. The division between those who believe in a personal God and those who believe in an impersonal God reflects the division between the transcendent and immanent view of God. It is no surprise that most scientists who believe in God tend more to the view of an impersonal God, because their whole life is dedicated to examining the reality of the cosmic order, which seems to operate according to a set of rules rather than personal supervision.

Of course, atheists don’t even believe in an impersonal God. One famous atheist, Sigmund Freud, argued that religion was an illusion, a simple exercise in “wish fulfillment.” According to Freud, human beings desired love, immortality, and an end to suffering and pain, so they gravitated to religion as a solution to the inevitable problems and limitations of mortal life. Marxists have a similar view of religion, seeing promises of an afterlife as a barrier to improving actual human life.

Another view was taken by the American philosopher George Santayana, whose book, Reason in Religion, is one of the very finest books ever written on the subject of religion. According to Santayana, religion was an imaginative and poetic interpretation of life; religion supplied ideal ends to which human beings could orient their lives. Religion failed only when it attributed literal truth to these imaginative ideal ends. Thus religions should be judged, according to Santayana, according to whether they were good or bad, not whether they were true or false.

This criterion for judging religion would appear to be irrational, both to rationalists and to those who cling to faith. People tend to equate worship of God with belief in God, and often see literalists and fundamentalists as the most devoted of all. But I would argue that worship is the act of submission to ideal ends, which hold value precisely because they are higher than actually existing things, and therefore cannot pass traditional tests of truth, which call for a correspondence to reality.

In essence, worship is submission to a transcendent Good. We see good in our lives all the time, but we know that the particular goods we experience are partial and perishable. Freud is right that we wish for goods that cannot be acquired completely in our lives and that we use our imaginations to project perfect and eternal goods, i.e. God and heaven. But isn’t it precisely these ideal ends that are sacred, not the flawed, perishable things that we see all around us? In the words of Santayana,

[I]n close association with superstition and fable we find piety and spirituality entering the world. Rational religion has these two phases: piety, or loyalty to necessary conditions, and spirituality, or devotion to ideal ends. These simple sanctities make the core of all the others. Piety drinks at the deep, elemental sources of power and order: it studies nature, honours the past, appropriates and continues its mission. Spirituality uses the strength thus acquired, remodeling all it receives, and looking to the future and the ideal. (Reason in Religion, Chapter XV)

People misunderstand ancient Greek religion when they think it is merely a set of stories about invisible personalities who fly around controlling nature and intervening in human affairs. Many Greek myths were understood to be poetic creations, not history; there were often multiple variations of each myth, and people felt free to modify the stories over time, create new gods and goddesses, and change the functions/responsibilities of each god. Rational consistency was not expected, and depictions of the appearance of any god or goddess in statues or painting could vary widely. For the Greeks, the gods were not just personalities, but transcendent forms of the Good. This is why Greek religion also worshipped idealized ends and virtues such as “Peace,” “Victory,” “Love,” “Democracy,” “Health,” “Order,” and “Wealth.” The Greeks represented these idealized ends and virtues as persons (usually females) in statues, built temples for them, and composed worshipful hymns to them. In fact, the tendency of the Greeks to depict any desired end or virtue as a person was so prevalent, it is sometimes difficult for historians to tell if a particular statue or temple was meant for an actual goddess/god or was a personified symbol. For the ancient Greeks, the distinction may not have been that important, for they tended to think in highly poetic and metaphorical terms.

This may be fine as an interpretation of religion, you may say, but does it make sense to conceive of imaginative transcendent forms as persons or spirits who can actually bring about the goods and virtues that we seek? Is there any reason to think that prayer to Athena will make us wise, that singing a hymn to Zeus will help us win a war, or that a sacrifice at the temples of “Peace” or “Health” will bring us peace or health? If these gods are not powerful persons or spirits that can hear our prayers or observe our sacrifices, but merely poetic representations or symbols, then what good are they and what good is worship?

My view is this: worship and prayer do not affect natural causation. Storms, earthquakes, disease, and all the other calamities that have afflicted humankind from the beginning are not affected by prayer. Addressing these calamities requires research into natural causation, planning, human intervention, and technology. What worship and prayer can do, if they are directed at the proper ends, is help us transcend ourselves, make ourselves better people, and thereby make our societies better.

In a previous essay, I reviewed the works of various physicists, who concluded that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. I think this dynamic view of reality is what we need in order to understand the relationship between the transcendent and the actual. We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are. The pantheism of Spinoza and Einstein is more rational than traditional myths that attributed natural events to a personal God who created the world in six days and subsequently punished evil by causing natural disasters. But pantheism is ultimately a poor basis for religion. What would be the point of worshipping the law of gravity or electromagnetism or the elements in the periodic table? These foundational parts of the universe are impressive, but I would argue that aspiring to something higher is fundamental not only to human nature but to the universe itself. The universe, after all, began simply with a concentrated point of energy; then space expanded and a few elements such as hydrogen and helium formed; only after hundreds of millions of years did the first stars, planets, and other elements necessary for life begin to emerge.

Worshipping the transcendent orients the self to a higher good, out of the immediate here-and-now. And done properly, worship results in worthy accomplishments that improve life. We tend to think of human civilization as being based on the rational mastery of a body of knowledge. But all knowledge began with an imagined transcendent good. The very first lawgivers had no body of laws to study; the first ethicists had no texts on morals to consult; the first architects had no previous designs to emulate; the first mathematicians had no symbols to calculate with; the first musicians had no composers to study. All our knowledge and civilization began with an imagined transcendent good. This inspired experimentation with primitive forms, and then improvement on those initial efforts. Only much later, after many centuries, did the fields of law, ethics, architecture, mathematics, and music become bodies of knowledge requiring years of study. So we attribute these accomplishments to reason, forgetting the imaginative leaps that first spurred these fields.


Are Human Beings Just Atoms?

In a previous essay on materialism, I discussed the bizarre nature of phenomena on the subatomic level, in which particles have no definite position in space until they are observed. Referencing the works of several physicists and philosophers, I put forth the view that reality consists not of tiny, solid objects but rather bundles of properties and qualities that emerge from potentiality to actuality. In this view, when one breaks down reality into smaller and smaller parts, one does not reach the fundamental units of matter; rather, one is gradually unbundling properties and qualities until the smallest objects no longer even have a definite position in space!

Why is this important? One reason is that the enormous prestige and accomplishments of science have sometimes led us down the wrong path in properly describing and interpreting reality. Science excels at advancing our knowledge of how things work, by breaking down wholes into component parts and manipulating those parts into better arrangements that benefit humanity. This is how we got modern medicine, computers, air conditioning, automobiles, and space travel. However, science sometimes falls short in properly describing and interpreting reality, precisely because it focuses more on the parts than the wholes.

This defect in science becomes particularly glaring when certain scientists attempt to describe what human beings are like. All too often there is a tendency to reduce humans to their component parts, whether these parts are chemical elements (atoms), chemical compounds (molecules), or the much larger molecules known as genes. However, while these component parts make up human beings, there are properties and qualities in human beings that cannot be adequately described in terms of these parts.

Marcelo Gleiser, a physicist at Dartmouth College, argues that “life is the property of a complex network of biochemical reactions . . . a kind of hungry chemistry that is able to duplicate itself.” Biologist Richard Dawkins claims that humans are “just gene machines,” and “living organisms and their bodies are best seen as machines programmed by the genes to propagate those very same genes,” though he qualifies his statement by noting that “there is a very great deal of complication, and indeed beauty in being a gene machine.” Philosopher Daniel Dennett claims that human beings are “moist robots” and the human mind is a collection of computer-like information processes which happen to take place in carbon-based rather than silicon-based hardware.

Now it is true that human beings are composed of atoms, which combine into molecules and chemical compounds such as genes. The issue, however, is whether describing the parts that compose a human being is the same as describing the whole human being. Yes, human beings are composed of atoms of oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. But these atoms can be found in many, many places throughout the universe, in varying quantities and combinations, and they do not have human qualities unless and until they are organized in just the right way. Likewise, genes are ubiquitous in life forms ranging from mammals to lizards to plants to bacteria. Even viruses have genes, though most scientists argue that viruses are not true life forms because they need a host to reproduce. Nevertheless, while human beings share a few properties and qualities with bacteria and viruses, humans clearly have many properties and qualities that these lower life forms do not.

In fact, the very distinction between life and death can be lost through an excessive focus on atoms and molecules. Consider the following: an emergency room doctor treats a patient suffering from a heart attack. Despite the physician’s best efforts, despite all of the doctor’s training and knowledge, the patient dies on the table. So what is the difference between the patient who has died and the patient as he was several hours ago? The quantity and types of atoms composing the body are approximately the same as when the patient was alive. So what has changed? Obviously, the properties and qualities expressed by the organization of the atoms in the human being have changed. The heart no longer supplies blood to the rest of the body, the lungs no longer supply oxygen, the brain no longer has electrical activity; the human being no longer has the ability to run or walk or jump or talk or think or love. Atoms have to be organized in an extremely precise manner in order for these properties and qualities to emerge, and this organization has been lost. So if we are really going to describe accurately what a human being is, we have to refer not just to the atoms, but to the overall organization or form.

The issue of form is what separates the ancient Greek philosophers Democritus and Plato. Both philosophers believed that the universe and everything in it was composed of atoms; but Democritus thought that nothing existed but atoms and the void (space), whereas Plato believed that atoms were arranged by a creator, who, being essentially good, used ideal forms as a blueprint. Contrary to the views of Judaism, Christianity, and Islam, however, Plato believed that the creator was not omnipotent, and was forced to work with imperfect matter to do the best job possible, which is why most created objects and life forms were imperfect and fell short of the ideal forms.

Democritus would no doubt dismiss Plato’s ideal forms as being unreal — after all, forms are not something solid, so how can anything that is not solid, not made of material, exist at all? But as I’ve pointed out, the atoms that compose the human body are found everywhere, whereas actual, living human beings have these same atoms organized in a precise, particular form. In other words, in order to understand anything, it is not enough to break it down into parts and study the parts; one has to look at the whole. The properties and qualities of a living human being, as a whole, definitely do exist, or we would not know how to distinguish a living human being from a dead human being or any other existing thing composed of the same atoms.

The debate between Democritus and Plato points to a difference in ways of knowing that persists to this day: analytic knowledge versus holistic knowledge. Analytic knowledge is pursued by science and reason; holistic knowledge is pursued by religion, art, and the humanities. The prestige of science and its technological accomplishments has elevated analytic understanding above all other forms of knowledge, but we remain lost without holistic understanding.

What precisely is “analytic knowledge”? The word “analyze” means “to study or determine the nature and relationship of the parts (of something) by analysis.” Synonyms for “analyze” include “break down,” “cut,” “deconstruct,” and “dissect.” In fact, the word “analysis” derives, via New Latin, from the Greek word analyein, meaning “to break up.” Analysis is an extremely valuable tool and is responsible for human progress in all sorts of areas. But the knowledge derived from analysis is primarily a description of and guide to how things work. It reduces knowledge of the whole to knowledge of the parts, which is fine if you want to take something apart and put it back together. But the knowledge of how things work is not the same as the knowledge of what things are as a whole, what qualities and properties they have, and the value of those qualities and properties. This latter knowledge is holistic knowledge.

The word “holism,” based on the ancient Greek word for “whole” (holos), was coined in the early twentieth century to promote the view that all systems, living or not, should be viewed as wholes and not just as collections of parts. It’s no accident that the words “whole,” “heal,” “healthy,” and “holy” are linguistically related. The problems of sickness, malnutrition, and injury were well-known to the ancients, and it was natural for them to see these problems as a disturbance to the whole human being, rendering a person incomplete and missing certain vital functions. Wholeness was an ideal end, which made wholeness sacred (holy) as well. (For an extended discussion of analytic/reductionist knowledge vs. holistic knowledge, see this post.)

Holistic knowledge is not just about ideal physical health. It’s about ideal forms in all aspects, including the qualities we associate with human beings we admire: wisdom, strength, beauty, courage, love, kindness. As mistaken as religions have been in understanding natural causation, it is the devotion to ideal forms that is really the essence of religion. The ancient Greeks worshipped excellence, as embodied in their gods; Confucians were devoted to family ties and duties; the Jews submitted themselves to the laws of the one God; Christians devoted themselves to the love of God, embodied in Christ.

Holistic knowledge provides no guidance as to how to conduct surgery or build a computer or launch a rocket; but it does provide insight into the ethics of medicine, the desirability or hazards of certain types of technology, and the proper ends of human beings. All too often, contemporary secular societies expect new technologies to improve human lives and pay no heed to ideal human forms, on the assumption that ideal forms are a fantasy. Then we are shocked when the new technologies are abused and not only bring out the worst in human nature but enhance the power of the worst.