The Influence of Christianity on Western Culture, Part Two: Religion and Culture

In my previous post, I addressed the debate between Christians and secular rationalists over the origins of the modern Western idea of human rights, with Christians attributing these rights to Christianity and secular rationalists crediting human reason. While acknowledging the crimes committed by the Christian churches in history, I also expressed skepticism about the ability of reason alone to provide a firm foundation for human rights.

In the second part of this essay, I would like to explore the idea that religion has a deep, partly subconscious, influence on culture and that this influence maintains itself even when people stop going to religious services, stop reading religious texts, and even stop believing in God. (Note: Much of what I am about to say next has been inspired by the works of the Christian theologian Reinhold Niebuhr, who has covered this issue in his books, The Nature and Destiny of Man and The Self and the Dramas of History.)

___________________________

What exactly is religion, and why does it have a deep impact on our culture and thinking? Nearly all of today’s major religions date back between 1,300 and 4,000 years. In some respects, these religions have changed, but in most of their fundamentals, they have not. As such, there are unattractive elements in all of these religions, originating in primitive beliefs held at a time when there was hardly any truly scientific inquiry. As a guide to history, religious texts from this era are extremely unreliable; as a guide to scientific knowledge of the natural world, these religions are close to useless. So why, then, does religion continue to exercise a hold on the minds of human beings today?

I maintain that religion should be thought of primarily as a Theory of the Good. It is a way of thinking that does not (necessarily) result in truthful journalism and history, does not create accurate theories of causation, and ultimately, cares less about what things are really like and more about what things should be like.

As Robert Pirsig has noted, all life forms seek the Good, if only for themselves. They search for food, shelter, warmth, and opportunities for reproduction. More advanced life forms pursue all these and also may seek long-term companionship, a better location, and a more varied diet. If life forms can fly, they may choose to fly for the joy of it; if they can run fast, they may run for the joy of it.

Human beings have all these qualities, but also one more: with our minds, we can imagine an infinite variety of goods in infinite amounts; this is the source of our endless desires. In addition, our more advanced brains also give us the ability to imagine broadened sympathies beyond immediate family and friends, to nations and to humankind as a whole; this is the source of civilization. Finally, we also gain a curiosity about the origin of the world and ourselves and what our ultimate destiny is, or should be; this is the source of myths and faith. It is these imagined, transcendent goods that are the material for religion. And as a religion develops, it creates the basic concepts and categories by which we interpret the world.

There are many similarities among the world’s religions in what is designated good and what is designated evil. But there are important differences as well, differences that have resulted in cultural clashes, sometimes leading to mild disagreements and sometimes escalating into the most vicious of wars. For the purposes of this essay, I am going to set aside the similarities among religions and discuss the differences.

Warning: Adequately covering all of the world’s major religions in a short essay is a hazardous enterprise. My depth of knowledge on this subject is not that great, and I will have to grossly simplify in many cases. I merely ask the reader for tolerance and patience; if you have a criticism, I welcome comments.

The most important difference between the major religions revolves around what is considered to be the highest good. This highest good seems to constitute a fundamental dividing line between the religions that is difficult to bridge. To make it simple, let’s summarize the highest good of each religion in one word:

Judaism – Covenant

Christianity – Love

Islam – Submission (to God)

Buddhism – Nirvana

Hinduism – Moksha

Jainism – Nonviolence (ahimsa)

Confucianism – Ren (Humanity)

Taoism – Wu Wei (inaction)

How does this perception of the highest good affect the nature of a religion?

Judaism: With only about 15 million adherents today, Judaism might appear to be a minor religion — but in fact, it is widely known throughout the world because of its huge influence on Christianity and Islam, which have billions of followers and have borrowed greatly from Judaism. Fundamental to Judaism is the idea of a covenant between God and His people, in which His people would follow the commandments of God, and God in return would bless His people with protection and abundance. This sort of faith faced many challenges over the centuries, as natural disasters and defeat in war were not always closely correlated with moral failings or a breach of the covenant. Nevertheless, the idea that moral behavior brings blessings has sustained the Jews and made them successful in many occupations for thousands of years. The chief disadvantage of Judaism has been its exclusive ties to a particular nation/ethnic group, which has limited its appeal to the rest of the world.

Christianity: Originating in Judaism, Christianity made a decisive break with Judaism under Jesus, and later, St. Paul. This break consisted primarily in recognizing that the laws of the Jews were somehow inadequate in making people good, because it was possible for someone to follow the letter of the law while remaining a very flawed or even terrible human being. Jesus’ denunciations of legalists and hypocrites in the New Testament are frequent and scathing. The way forward out of this, according to Jesus, was to simply love others, without making distinctions of rank, ethnicity, or religion. This original message of Jesus, and his self-sacrifice, inspired many Jews and non-Jews and led to the gradual, but steadily accelerating, growth of this minor sect. The chief flaw in Christianity became apparent hundreds of years after the crucifixion, when this minority sect became socially and politically powerful, and Christians used their new power to violently oppress others. This stark hypocrisy has discredited Christianity in the eyes of many.

Islam: A relatively young monotheistic religion, Islam grew out of the Arabian Peninsula in the seventh century AD. Its prophet, Muhammad, clearly borrowed from Judaism and Christianity, but rejected the exclusivity of Judaism and the status of Jesus as the son of God. The word “Islam” means submission, but contrary to some commentators, it means submission to God, not to Islam or Muslims, which would be blasphemous. The requirements of Islam are fairly rigorous, including prayers five times a day; there is also an extensive body of Islamic law that is relatively strict, though implemented unevenly in Islamic countries today, with Iran and Saudi Arabia being among the strictest. There is no denying that the birth of Islam sparked the growth of a great empire that supported an advanced civilization. In the words of Bernard Lewis, “For many centuries the world of Islam was in the forefront of human civilization and achievement.” (What Went Wrong? The Clash Between Islam and Modernity in the Middle East, p. 3) Today, the Islamic world lags behind the rest of the world in many respects, perhaps because its strict social rules and traditions inhibit innovation in the modern world.

Confucianism: Founded by the scholar and government official Confucius in the 6th century B.C., Confucianism can be regarded as a system of morals based on the concept of Ren, or humanity. Confucius emphasized duty to family and honesty in government, and espoused a version of the Golden Rule. There is a great deal of debate over whether Confucianism is actually a religion or mainly a philosophy and system of ethics. In fact, Confucius was a practical man who did not discuss God or the afterlife, and never proclaimed an ability to perform miracles. But his impact on Chinese civilization, and Asian civilization generally, was tremendous, and the values of Confucius are deeply embedded in Chinese and other Asian societies to this day.

Buddhism: Founded by Gautama Buddha in the 6th century B.C., Buddhism addressed the problem of human suffering. In the view of the Buddha, our suffering arises from desire; because we cannot always get what we want, and what we want is never permanent, the human condition is one of perpetual dissatisfaction. When we die, we are born into another body, to suffer again. The Buddha argued that this cycle of suffering and rebirth could be ended by following the “eightfold path” – right understanding, right thought, right speech, right conduct, right livelihood, right effort, right mindfulness, and right concentration. Following this path could lead one to nirvana, the extinguishing of the self and the end of the cycle of rebirth and suffering. Buddhism is not entirely pacifistic, but it contains strong elements of pacifism, and a large number of Buddhists are vegetarian. Many non-Buddhists, however, would dispute the premise that life is suffering and that the dissolution of the self is a solution to suffering.

Hinduism: The third largest religion in the world, Hinduism is also considered the world’s oldest religion, with roots stretching back more than 4,000 years. However, Hinduism consists of different schools with diverse beliefs; there are multiple written texts in the Hindu tradition, but no single unifying text, such as the Bible or the Quran. A Hindu can believe in multiple gods or one God, and the Hindu conception of God(s) can also vary. There is even a Hindu school of thought, going back thousands of years, that is atheistic. There is a strong tradition of nonviolence (ahimsa) in Hinduism, which inspired Gandhi’s campaign of nonviolent resistance against British colonial rule in the early twentieth century. The chief goal of Hindu practice is moksha, or liberation from the cycle of birth, death, and rebirth — roughly similar to the concept of nirvana.

Jainism: Originating in India around 2,500 years ago, the Jain religion posits ahimsa, or nonviolence, as the highest good and goal of life. Jains practice a strict vegetarianism, which extends to avoiding certain dairy products that involve harm to animals and any vegetable whose harvesting may harm insects. The other principles of Jainism include anekāntavāda (non-absolutism) and aparigraha (non-attachment). The principle of non-absolutism recognizes that the truth is “many-sided” and impossible to fully express in language, while non-attachment refers to the necessity of avoiding the pursuit of property, taking and keeping only what is necessary.

Taoism: Developed in the 4th century B.C., Taoism is one of the major religions in China, along with Confucianism and Buddhism. “Tao” can be translated as “the Way,” or “the One, which is natural, spontaneous, eternal, nameless, and indescribable. . . the beginning of all things and the way in which all things pursue their course.” Pursuit of the Way is not meant to be difficult or arduous, or to require sacrifice, as in other religions. Rather, the follower must practice wu wei, or effortless action. The idea is that one must act in accord with the cosmos, not fight or struggle against it. Taoism values naturalness, spontaneity, and detachment from desires.

Now, all these religions, as well as many I have not listed, have value. The monotheism and strict moralism of Judaism stood in stark contrast to the ancient pagan religions, which saw the gods as conflictual, cruel, and prone to immoral behavior. The moral disciplines of Islam invigorated a culture and created a civilization more advanced than the Christian Europe of the Middle Ages. Buddhism, Hinduism, and Jainism have placed strong emphasis on overcoming self-centeredness and rejecting violence. Confucianism has instilled the values of respect for elders, love of family, and love of learning throughout East Asia. Taoism’s emphasis on harmony puts a brake on human tendencies to dominate and control.

What I would like to focus on now are the particular contributions Christianity has made to Western civilization and how Christianity has shaped the culture of the West in ways we may not even recognize, contrasting the influence of Christianity with the influence of the other major religions.

__________________________

Christianity has provided four main concepts that have shaped Western culture, concepts that retain their influence today, even among atheists.

(1) The idea of a transcendent good, above and beyond nature and society.

(2) An emphasis on the worth of the individual, above society, government, and nature.

(3) Separation of religion and government.

(4) The idea of a meaningful history, that is, an unfolding story that ends with a conclusion, not a series of random events or cycles.

Let’s examine each of these in detail.

(1) Transcendent Good. I have written in some detail about the concept of transcendence elsewhere. In brief, transcendence refers to “the action of transcending, surmounting, or rising above . . . excelling.” To seek the transcendent is to aspire to something higher than reality. The difficulty with transcendence is that it’s not easily subject to empirical examination:

[B]ecause it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach.

Transcendent religions differ from pantheistic and panentheistic religions by insisting on the greater value or goal of an ideal state of being above and beyond the reality we experience. Since this ideal state is not subject to empirical proof, transcendent religions appear irrational and superstitious to many. Moreover, the dreamy idealism of transcendent religions often results in a fanaticism that leads to intolerance and religious wars. For these reasons, philosophers and scientists in the West usually prefer pantheistic interpretations of God (see Spinoza and Einstein).

The religions of India — Hinduism, Buddhism, Jainism — have strong tendencies toward pantheism or panentheism, in which all existence is bound by a universal spirit, and our duty is to become one with this spirit. There is not a sharp distinction between this universal spirit and the universe or reality itself.

In China, Taoism rejects a personal God, while Confucianism is regarded by most as a philosophy or moral code rather than a religion. (The rational pragmatism of Chinese religion is probably why China had no major religious wars until a Chinese Christian in the 19th century led a rebellion on behalf of his “Heavenly Kingdom” that lasted 14 years and led to the deaths of tens of millions.)

And yet, there is a disadvantage in the rational pragmatism of Chinese religions — without a dreamy idealism, a culture can stagnate and become too accepting of evils. Chinese novelist Yan Lianke, who is an atheist, has remarked:

In China, the development of religion is the best lens through which to view the health of a society. Every religion, when it is imported to China is secularized. The Chinese are profoundly pragmatic. . . . What is absent in Chinese civilization, what we’ve always lacked, is a sense of the sacred. There is no room for higher principles when we live so firmly in the concrete. The possibility of hope and the aspiration to higher ideals are too abstract and therefore get obliterated in our dark, fierce realism. (“Yan Lianke’s Forbidden Satires of China,” The New Yorker, 8 Oct 2018)

Now, Christianity is not alone in positing a transcendent good — Judaism and Islam also do this. But there are other particular qualities of Christianity that we must look to as well.

(2) Individual Worth.

To some extent, all religions value the individual human being. Yet, individual worth is central to Christianity in a way that is not found in other religions. The religions of India certainly value human life, and there are strong elements of pacifism in these religions. But these religions also tend to devalue individuality, in the sense that the ultimate goal is to overcome selfhood and merge with a larger spirit. Confucianism emphasizes moral duty, from the lowest members of society to the highest; individual worth is recognized, but the individual is still part of a hierarchy, and serves that hierarchy. In Taoism, the individual submits to the Way. In Islam, the individual submits to God. In Judaism, the idea of a Chosen People elevates one particular group over others (although this group also falls under the severe judgment of God).

Only under Christianity was the individual human being, whatever that person’s background, elevated to the highest worth. Jesus’ teachings on love and forgiveness, regardless of a person’s status and background, became central to Western civilization — though frequently violated in practice. Jesus’s vision of the afterlife emphasized not a merger with a universal spirit, but a continuance of individual life, free of suffering, in heaven.

(3) Separation of religion and government.

Throughout history, the relationship between religious institutions and government has varied. In some states, religion and government were unified, as in the Islamic caliphate. In most other cases, political authorities were not religious leaders, but priests were part of the ruling class that assisted the rulers. In China, Confucianism played a major role in the administrative bureaucracy, but Confucianism was a mild and rational religion that had no interest in pursuing and punishing heretics. In Judaism, rabbis often had some actual political power, depending on the historical period and location, but their power was never absolute.

Christianity originated with the martyrdom of a powerless man at the hands of an oppressive government and an intolerant society. In subsequent years, this minor sect was persecuted by the Roman empire. This persecution lasted for several hundred years; at no time during this period did Christianity receive the support, approval, or even tolerance of the imperial government.

Few other religions have originated in such an oppressive atmosphere and survived. China absorbed Confucianism, Taoism, and Buddhism without wars and extensive persecution campaigns. Hinduism, Buddhism, and Jainism grew out of the same roots and largely tolerated each other. Islam had its enemies in its early years, but quickly triumphed in a series of military campaigns that built a great empire. Even the Jews, one of the most persecuted groups in history, were able to practice their religion in their own state(s) for hundreds of years before military defeat and diaspora; in 1948, the Jews regained a state.

Now, it is true that in the 4th century A.D., Christianity became the official state religion of the Roman empire, and the Christian persecution of pagan worshippers began. Over the centuries, the Catholic Church exercised enormous influence over the culture, economy, and politics of Europe. But by the 18th and 19th centuries, the idea of a strict separation between church and state became widely popular, first in America, then in Europe. While Christian churches fought this reduction in Christian political power and influence, the separation of Church and state was at least compatible with the origins of Christianity in persecution and martyrdom, and did not violate the core beliefs of Christianity.

(4) A meaningful history.

The idea that history consists of a progressive movement toward an ideal end is not common to all cultures. Ancient Greeks and Romans saw history as a long decline from an original “Golden Age,” or they saw history as essentially cyclical, consisting of a never-ending rise and decline of various civilizations. The historical views of Hinduism, Buddhism, Taoism, and Confucianism were also cyclical.

It was Judaism, Christianity, and Islam that interpreted history as progressing toward an ideal end, a kingdom of heaven. But as a result of the Renaissance in the West, and then the Enlightenment, the idea of an otherworldly kingdom was abandoned, and the ideal end of history became secularized. The German philosopher Hegel (1770-1831) interpreted history as a dialectical clash of ideas moving toward its ultimate end, which was human freedom. (An early enthusiast for the French Revolution, Hegel once referred to Napoleon as the “world soul” on horseback.) Karl Marx took Hegel’s vision one step further, removing Hegel’s idealism and positing a “dialectical materialism” based on class conflict. This class conflict, according to Marx, would one day culminate in a final, bloody clash that would abolish class distinctions and bring about the full equality of human beings under communism.

Alas, these dreams of earthly utopia did not come to pass. Napoleon crowned himself emperor in 1804 and went to work creating a new dynasty and aristocracy with which to rule Europe. In the twentieth century, Communist regimes were extraordinarily oppressive everywhere they arose, killing tens of millions of people. Certainly, the idea of human equality was attractive, and political movements arose and took power based on these ideas. Yet the results were bloodshed and tyranny. Even so, when Soviet communism collapsed, the idea of a secular “end of history,” based on the thought of Hegel, became popular again.

According to the American Christian theologian Reinhold Niebuhr, the visions of Hegel and Marx were merely secular versions of Christianity, which failed because, while ostensibly dedicated to the principles of individual worth, equality, and historical progress, they could not overcome the essential fact of human sinfulness. In Christianity, this sinfulness was the basis for the prophecies in the Book of Revelation which foresaw a final battle between good and evil, requiring the intervention of God in order to achieve a final triumph of good.

According to Niebuhr, the fundamental error of all secular ideologies of historical progress was to suppose that the human capacity to reason could conquer tendencies to sinfulness in the same way that advances in science could conquer nature. This did not work, in Niebuhr’s view, because reason could be a tool of self-aggrandizement as well as selflessness, and was therefore insufficient to support universal brotherhood. The fundamental truth about human nature, which the Renaissance and the Enlightenment neglected, was that man is an unbreakable organic unity of mind, body, and spirit. Man’s increasing capacity to use reason resulted in new technologies and wealth but did not — and could not — overcome human tendencies to seek power. For this reason, human history was the story of the growth of both good and evil, not the triumph of good over evil. Only the intervention of God, through Christ, could bring the final fulfillment of history. Certainly, belief in this ultimate fulfillment requires a leap of faith — but whether or not one believes the Book of Revelation, it is hard to deny that human dreams of earthly utopia have been frustrated time and time again.

Perhaps at this point, you may agree with my general assessment of Christian ideas, and even find some similarities between Christian ideas and contemporary secular liberalism. Nevertheless, you may also conclude that the causal linkage between Christianity and modern liberalism has not been established. After all, the first modern liberal democracies did not emerge until nearly 1800 years after Christ. Why so long? Why did the Christian churches have such a long record of intolerance and contempt for liberal ideas? Why did the Catholic Church so often ally with monarchs, defend feudalism, and oppose liberal revolutions? Why did various Christian churches tolerate and approve of slavery for hundreds of years? I will address these issues in Part Three.

The Influence of Christianity on Western Culture, Part One: Liberty, Equality, and Human Rights

Does religion have a deep influence on the minds of those living in a largely secular culture, shaping the subconscious beliefs and assumptions of even staunch atheists? Such is the argument of New York Times columnist Ross Douthat, who argues that the contemporary secular liberalism of America and Europe is rooted in the principles of Christianity, and that our civilization suffers when it borrows selectively from Christianity while rejecting the religion as a whole.

Douthat’s provocative claim was challenged by liberal commentators Will Saletan and Julian Sanchez, and if you have time, you can review the three-sided debate here, here, here, here, and here. In brief, Douthat argues the following:

When I look at your secular liberalism, I see a system of thought that looks rather like a Christian heresy, and not necessarily a particularly coherent one at that. In Bad Religion, I describe heresy as a form of belief that tends to emphasize certain elements of the Christian synthesis while downgrading or dismissing other aspects of that whole. And it isn’t surprising that liberalism, which after all developed in a Christian civilization, does exactly that, drawing implicitly on the Christian intellectual inheritance to ground its liberty-equality-fraternity ideals.

Indeed, it’s completely obvious that absent the Christian faith, there would be no liberalism at all. No ideal of universal human rights without Jesus’ radical upending of social hierarchies (including his death alongside common criminals on the cross). No separation of church and state without the gospels’ ‘render unto Caesar’ and St. Augustine’s two cities. No liberal confidence about the march of historical progress without the Judeo-Christian interpretation of history as an unfolding story rather than an endlessly repeating wheel. . . .

And the more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual ‘skyhooks,’ suspended halfway between our earth and the heaven on which many liberals have long since given up. 

Julian Sanchez, a scholar with the Cato Institute, responds to Douthat by noting that societies don’t need to agree on God and religion to support human rights, only to agree that human rights are good. According to Sanchez, invoking God as the source of goodness doesn’t really solve any problems; at best, it provides prudential reasons to behave well (i.e., to obtain rewards and avoid punishment in the afterlife). If we believe human rights are good and need to be preserved, the idea of God adds nothing to the belief: “The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own?” Furthermore, Sanchez argues that morals can be regarded as “normative properties” that are already part of reality, and that secular moralists can appeal to this reality just as easily as believers appeal to God, except that normative properties don’t require beliefs about implausible deities and “Middle Eastern folklore.”

Both Douthat and Sanchez make some good arguments, but there are some weaknesses in both sides’ claims that I wish to explore in this extended essay. My view, in brief, is this: Christianity, or any other religion, does not have to be a package deal. Religious claims about various miracles that seem to violate the patterns of nature established by science or the empirical findings of history and archeology should be subject to scrutiny and skepticism like any other claim. Traditional morals that have long-standing religious justifications, from child marriage to slavery, should be subject to the same scrutiny, and rejected when necessary.

And yet, it is difficult to deny the influence of religion on our perceptions — and conceptions — of what is good. I find existing attempts to base human morality and rights solely on reason and science to be unpersuasive; morals are not like the patterns of nature, nor can they be proved by the deductive methods of reason without accepting premises that cannot be proved. While rooted in reality, morals seem to point to something higher than our current reality. And human freedom to choose defies our attempts to prove the existence of morals in the same way that we can prove the deterministic patterns of gravity, chemical reactions, and nuclear fission.

____________________________

Let us consider one such attempt to establish human rights through science and reason, by Michael Shermer, director of The Skeptics Society and founder of Skeptic magazine. In an article for Theology and Science, Shermer attempts to found human rights on reason and science, relying exclusively on “nature and nature’s laws.”

Mr. Shermer begins his essay by noting the many people in Europe who were put to death for the crime of “witchcraft” in the 15th through 17th centuries, and how this witch-hunting hysteria was endorsed by the Catholic Church. Fortunately, notes Mr. Shermer, “scientific naturalism,” the “principle that the world is governed by natural laws and forces that can be understood,” and “Enlightenment humanism” arose in the 18th and 19th centuries, destroying the old superstitions of religion. Shermer cites Steven Pinker to explain how the application of scientific naturalism to human affairs provided the principles on which human societies made moral progress:

When a large enough community of free, rational agents confers on how a society should run its affairs, steered by logical consistency and feedback from the world, their consensus will veer in certain directions. Just as we don’t have to explain why molecular biologists discovered that DNA has four bases . . . we may not have to explain why enlightened thinkers would eventually argue against African slavery, cruel punishments, despotic monarchs, and the execution of witches and heretics.

Shermer argues that morals follow logically from reason and observation, and proposes a Principle of Moral Good: “Always act with someone else’s moral good in mind, and never act in a way that it leads to someone else’s moral loss (through force or fraud).”

Unfortunately, this principle, allegedly founded on reason and science, appears to be simply another version of the “Golden Rule,” which has been in existence for over two thousand years, and is found in nearly all the major religions. (The West knows the Golden Rule mainly through Christianity.) None of the religions discovered this rule through science or formal logical deduction. Human rights are not subject to empirical proof like the laws of physics and they don’t follow logically from deductive arguments, unless one begins with premises that support — or even presuppose — the conclusion.

Human rights are a cultural creation. They don’t exist in nature, at least not in a way that we can observe them. To the extent human rights exist, they exist in social practices and laws — sometimes only among a handful of people, sometimes only for certain categories of persons, sometimes widely in society. People can choose to honor and respect human rights, or violate such rights, and do so with impunity.

For this reason, I regard human rights as a transcendent value, something that does not exist in nature, but that many of us regard as worth aspiring to. In a previous essay on transcendence, I noted:

The odd thing about transcendence is that because it seems to refer to a striving for an ideal or a goal that goes above and beyond an observed reality, transcendence has something of an unreal quality. It is easy to see that rocks and plants and stars and animals and humans exist. But the transcendent cannot be directly seen, and one cannot prove the transcendent exists. It is always beyond our reach. . . . We worship the transcendent not because we can prove it exists, but because the transcendent is always drawing us to a higher life, one that excels or supersedes who we already are.

The evils that human beings have inflicted on other human beings throughout history cannot all be attributed to superstitions and mistaken beliefs, whether about witchcraft or the alleged inferiority of certain races. Far more people have been killed in wars for territory, natural resources, control of trade routes, and the power to rule than have been killed over accusations of witchcraft. And why not? Is it not compatible with reason to desire wealth and power? The entire basis of economics is that people are going to seek to maximize their wealth. And the basis of modern liberal democracy is the idea that checks and balances are needed to block excessive power-seeking, that reason itself is insufficient. Historians don’t ask why princes seek to be kings, or why nations seek to expand their territory — it is taken for granted that these desires are inherent to human beings and compatible with reason. As for slavery, it may have been justified by reference to certain races as inferior, but the pursuit of wealth was the main motivation of slave owners, with the justifications tacked on for appearance’s sake. After all, the fact that states in the American South felt compelled to pass laws forbidding the teaching of blacks to read indicates that southerners did in fact see blacks as human beings capable of reasoning.

The problem with relying on reason as a basis for human rights is that reason in itself is unable to bridge the gap between desiring one’s own good and desiring the same good for others. It is a highly useful premise in economics and political science that human beings are going to act to maximize their own good. From this premise, many important and useful theories have been developed. Acting for the good of others, on the other hand, particularly when it involves a high degree of self-sacrifice, is extremely variable. It takes place within families, to a limited extent it results in charitable contributions to strangers, and in some cases, soldiers and emergency rescue workers give their lives to save others. But it’s not reason that’s the motivating factor here — it’s love and sympathy and a sense of duty. Reason, on the other hand, is the tool that tells you how much you can give to others without going broke.

Still, it’s one thing to criticize reason as the basis of human rights; it is quite another to provide credit to Christianity for human rights. The historical record of Christianity with regard to human rights is not one that inspires. Nearly all of the Christian churches have been guilty of instigating, endorsing, or tolerating slavery, feudalism, despotism, wars, and torture, for hundreds of years. The record is long and damning.

Still, is it possible that Christianity provided the cultural assumptions, categories, and framework for the eventual flourishing of human rights? After all, neither the American Revolution nor the French Revolution was successful at first in fully implementing human rights. America fought a civil war before slavery was ended, did not allow women to vote until 1920, and did not grant most blacks a consistently recognized right to vote until the 1960s. The French Revolution of 1789 degenerated into terror, dictatorship, and wars of conquest; it took many decades for France to attain a reasonably stable republic. The pursuit of human rights even under secular liberalism was a long, hard struggle, in which ideals were only very gradually realized.

This long struggle to implement liberal ideals raises the question: Is it possible that Christianity had a long-term impact on the development of the West that we don’t recognize because we had already absorbed Christian assumptions and premises in our reason and did not question them? This is the question that will be addressed in subsequent posts.

Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:


1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”).

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (Percept and Concept – The Abuse of Concepts)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.


2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above normal. They lost not their intellectual ability but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure persons’ emotional intelligence.


3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. Inventing a new word that becomes widely adopted, or coming up with an idea that is both completely original and worthy, is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, by the most intelligent people in a culture, has overcome many backward and barbarous practices, and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and their contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture they live in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth-century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as unfounded in reason and to begin afresh with pure deductive logic, as Descartes did. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)


4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is, of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head; you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to fully succeed at this task simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience and can’t be fully learned from a text. Many times, when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of examples that are fed into the computer. If the computer’s answer is correct, the connections responsible for that answer are strengthened; if the answer is incorrect, those connections are weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no explicit set of rules, simply correlations built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
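To make the strengthening-and-weakening idea concrete, here is a toy sketch (not from the article, and far simpler than a real medical-imaging network) of a single artificial neuron whose "connections" are nudged up or down according to whether its answer on each example is right or wrong. The data, labels, and learning rate are invented purely for illustration.

```python
# Toy illustration of learning by adjusting "connections" (weights).
# A single artificial neuron learns to separate two clusters of points;
# the data and labels here are invented for the sake of the example.
import random

random.seed(0)

# Hypothetical training data: (feature_1, feature_2, label); label 1 = "lesion".
data = ([(random.gauss(2, 0.5), random.gauss(2, 0.5), 1) for _ in range(50)]
        + [(random.gauss(-2, 0.5), random.gauss(-2, 0.5), 0) for _ in range(50)])

w1, w2, bias = 0.0, 0.0, 0.0   # the "connections" to be adjusted
learning_rate = 0.1

for _ in range(20):                  # repeated passes over the examples
    for x1, x2, label in data:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = label - prediction   # 0 if correct, +1 or -1 if wrong
        # Strengthen or weaken each connection in proportion to its input.
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

correct = sum(1 for x1, x2, label in data
              if (1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0) == label)
print(f"learned weights: {w1:.2f}, {w2:.2f}, bias: {bias:.2f}; "
      f"accuracy: {correct}/{len(data)}")
```

Real diagnostic networks have millions of such connections and use more elaborate update rules, but the underlying principle is the same: the system is never given explicit rules, only corrections, and the resulting weights are not easily interpretable, which is why the trained model remains a "black box."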


5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by various persons at one point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincare believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincare believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.


6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics and chemistry and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________


This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle himself but is often used to illustrate his logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)
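
For readers who want to see the deduction spelled out formally, the syllogism might be rendered in a proof assistant such as Lean roughly as follows; the names Person, mortal, and socrates are merely illustrative labels, and the proof amounts to applying the universal premise to the particular case.

```lean
-- An illustrative formalization of the syllogism (names are invented labels).
variable (Person : Type)             -- the class of men
variable (mortal : Person → Prop)    -- the property of being mortal
variable (socrates : Person)         -- premise: Socrates is a man

-- Premise: all men are mortal.  Conclusion: Socrates is mortal.
theorem socrates_is_mortal (allMortal : ∀ p : Person, mortal p) :
    mortal socrates :=
  allMortal socrates
```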

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well, because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now, one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately, without going through a process of argument or logical deduction. Aristotle compares this capacity to perception, noting that we can discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!