Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely-accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:


1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”)

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (“Percept and Concept – The Abuse of Concepts”)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.


2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above normal. They did not lose their intellectual ability, but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading to either a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important tasks.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure a person’s emotional intelligence.


3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. If we invent a new word that becomes widely adopted, if we come up with an idea that is both completely original and worthy, that is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, by the most intelligent people in a culture, has overcome many backward and barbarous practices and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and those contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture in which they live. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)


4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is, of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head; you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to be fully successful in interpreting medical scans simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience but can’t be fully learned in a text. Many times when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
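
The strengthen-or-weaken training scheme described above can be sketched in a few lines of code. This is not the medical-scan system from the article, just a minimal illustration of the idea using a single artificial neuron and an invented toy task (deciding whether two numbers sum to more than 1):

```python
import random

# A single artificial neuron trained by strengthening or weakening
# its connections ("weights") after each answer. The task is invented
# for illustration: decide whether two numbers sum to more than 1.
random.seed(0)

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, followed by a hard threshold.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Thousands of labeled examples stand in for the thousands of scans.
for _ in range(5000):
    x = (random.random(), random.random())
    target = 1 if x[0] + x[1] > 1.0 else 0
    error = target - predict(x)  # 0 if correct, +1 or -1 if wrong
    # Strengthen or weaken each connection in proportion to its input.
    weights[0] += learning_rate * error * x[0]
    weights[1] += learning_rate * error * x[1]
    bias += learning_rate * error

# The trained neuron now answers correctly most of the time, yet its
# "knowledge" is just three numbers -- a black box of sorts.
tests = [(random.random(), random.random()) for _ in range(1000)]
correct = sum(predict(x) == (1 if x[0] + x[1] > 1.0 else 0) for x in tests)
print(correct, "of 1000 test points classified correctly")
```

Notice that nothing in the final program resembles a rule like “answer 1 when the sum exceeds 1”; the competence is encoded implicitly in the adjusted weights, which is exactly what makes such systems opaque.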


5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. And yet all knowledge was created by particular persons at some point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries are rarely an outcome of logical deduction but of a “free association” of ideas that often occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincaré believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincaré believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.


6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics and chemistry and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________


This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle, but which has been used as an example of Aristotle’s logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)

The logic is sound, and the conclusion follows from the premises. But this simple example was not at all typical of most real-life puzzles that human beings faced. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

How Random is Evolution?

“Man is the product of causes which had no prevision of the end they were achieving . . . his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms. . . .” – Bertrand Russell

In high school or college, you were probably taught that human life evolved from lower life forms, and that evolution was a process in which random mutations in DNA, the genetic code, led to the development of new life forms. Most mutations are harmful to an organism, but some mutations confer an advantage to an organism, and that organism is able to flourish and pass down its genes to subsequent generations – hence, “survival of the fittest.”

Many people reject the theory of evolution because it seemingly removes the role of God in the creation of life and of human beings and suggests that the universe is highly disordered. But all available evidence suggests that life did evolve, that the world and all of its life was not created in six days, as the Bible asserted. Does this mean that human life is an accident, that there is no larger intelligence or purpose to the universe?

I will argue that although evolution does indeed suggest that the traditional Biblical view of life’s origins is incorrect, people have the wrong impression of (1) what randomness in evolution means and (2) how large the role of randomness is in evolution. While it is true that individual micro-events in evolution can be random, these events are part of a larger system, and this system can be highly ordered even if particular micro-events are random. Moreover, recent research in evolution indicates that in addition to random mutation, organisms can respond to environmental factors by changing in a manner that is purposive, not random, in a direction that increases their ability to thrive.

____________________

So what does it mean to say that something is “random”? According to the Merriam-Webster dictionary, “random” means following “a haphazard course,” “lacking a definite plan, purpose, or pattern.” Synonyms for “random” include the words “aimless,” “arbitrary,” and “slapdash.” It is easy to see why, when people are told that evolutionary change is a random process, many reject the idea outright. This is not necessarily a matter of unthinking religious prejudice. Anyone who has examined nature and the biology of animals and human beings can’t help but be impressed by how enormously complex and precisely ordered these systems are. The fact of the matter is that it is extraordinarily difficult to build and maintain life; death and nonexistence are relatively easy. But what does it mean to lack “a definite plan, purpose, or pattern”? I contend that this definition, insofar as it applies to evolution, refers only to the particular micro-events of evolution considered in isolation, not to the broader outcome or the sum of the events.

Let me illustrate what I mean by presenting an ordinary and well-known case of randomness: rolling a single die. A die is a cube with a number, 1-6, on each of its six sides. The outcome of any single roll of the die is random and unpredictable, and if you roll a die multiple times, each outcome, as well as the particular sequence of outcomes, will be unpredictable. But if you look at the broader, long-term outcome after 1000 rolls, you will see a pattern: an approximately equal number of ones, twos, threes, fours, fives, and sixes will come up, and the average value of all rolls will be approximately 3.5.
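
A few lines of code can verify this long-run pattern for yourself (the simulation below is my own illustration, not drawn from any source):

```python
import random

# Each individual roll is unpredictable, but the long-run pattern
# is highly regular: roughly equal counts per face, average near 3.5.
random.seed(42)

rolls = [random.randint(1, 6) for _ in range(100_000)]

average = sum(rolls) / len(rolls)
counts = {face: rolls.count(face) for face in range(1, 7)}

print(round(average, 3))   # close to 3.5
print(counts)              # each face appears close to 1/6 of the time
```

No individual roll can be predicted, yet the summary statistics are so stable that a large deviation from them would itself be evidence of a loaded die.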

Why is this? Because the die itself is a highly precise, ordered system. Each die must have sides of equal length and an equal distribution of density/weight throughout in order to make the outcome truly unpredictable; otherwise a gambler who knows the design of the die may have an edge. One die manufacturer brags, “With tolerances less than one-third the thickness of a human hair, nothing is left to chance.” [!] In fact, a common method of cheating with dice is to shave one or more sides or insert a weight into one end of the die. This results in a system that is also precisely ordered, but in a way that makes certain outcomes more likely. After a thousand rolls of the die, one or more outcomes will come up more frequently, and this pattern will stand out suspiciously. But the person who cheated by tilting the odds in one direction may have already escaped with his or her winnings.

If you look at how casinos make money, you will find that it is precisely by structuring the rules of each game to give the casino an edge, which allows it to make a profit in the long run. The precise outcome of each particular game is not known with certainty, the particular sequence of outcomes is not known, and the balance sheet of the casino at the end of the night cannot be predicted. But there is definitely a pattern: in the long run, the sum of events results in the casino winning and making a profit, while the players as a group will lose money. When casinos go out of business, it is generally because they can’t attract enough customers, not because they lose too many games.
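
To make this concrete, here is a small simulation of an invented casino game (the game and payout are my own illustration): the player bets one unit on a single die face and is paid 4-to-1 on a win, even though a fair payout would be 5-to-1. Each individual bet is unpredictable, but the house edge emerges reliably:

```python
import random

# One bet: the player picks a face and wagers 1 unit. The house pays
# 4-to-1 on a win, though the true odds are 5-to-1 against the player,
# giving the house an expected profit of 1/6 of a unit per bet.
random.seed(3)

casino_profit = 0
for _ in range(100_000):
    roll = random.randint(1, 6)
    if roll == 1:           # player's chosen face comes up
        casino_profit -= 4  # house pays out 4 units
    else:
        casino_profit += 1  # house keeps the 1-unit stake

print(casino_profit)  # over 100,000 bets, near 100_000 / 6
```

The casino cannot say which bets it will win, but it never needs to: the structure of the payout rules, not any particular outcome, guarantees the long-run pattern.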

The ability to calculate the sum of a sequence of random events is the basis of the so-called “Monte Carlo” method in mathematics. Basically, the Monte Carlo method involves setting certain parameters, selecting random inputs until the number of inputs is quite large, and then calculating the final result. It’s like throwing darts at a dartboard repeatedly and examining the pattern of holes. One can use this method with 30,000 randomly plotted points to calculate the value of pi to within 0.07 percent.
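
Here is a minimal sketch of that dartboard idea: scatter random points in a unit square and count how many land inside the quarter-circle of radius 1. The fraction that lands inside approximates pi/4:

```python
import random

# Monte Carlo estimate of pi: the quarter-circle covers pi/4 of the
# unit square, so the fraction of random points landing inside it
# approaches pi/4 as the number of points grows.
random.seed(7)

n = 30_000
inside = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:  # point falls inside the quarter-circle
        inside += 1

pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.14159
```

Every individual dart lands at random, yet the aggregate converges on one of the most precisely ordered constants in mathematics.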

So if randomness can exist within a highly precise order, what is the larger order within which the random mutations of evolution operate? One aspect of this order is the bonding preferences of atoms, which are responsible not only for shaping how organisms arise, but how organisms eventually develop into astonishingly complex and wondrous forms. Without atomic bonds, structures would fall apart as quickly as they came together, preventing any evolutionary advances. The bonding preferences of atoms shape the parameters of development and result in molecular structures (DNA, RNA, and proteins) that retain a memory or blueprint, so that evolutionary change is incremental. The incremental development of organisms allows for the growth of biological forms that are eventually capable of running at great speeds, flying long distances, swimming underwater, forming societies, using tools, and, in the case of humans, building technical devices of enormous sophistication.

The fact of incremental change that builds upon previous advances is a feature of evolution that makes it more than a random process. This is illustrated by biologist Richard Dawkins’ “weasel program,” a computer simulation of how evolution works by combining random micro-events with the retaining of previous structures so that over time a highly sophisticated order can develop. The weasel program is based on the “infinite monkey theorem,” the fanciful proposal that an infinite number of monkeys with an infinite number of typewriters would eventually produce the works of Shakespeare. This theorem has been used to illustrate how order could conceivably emerge from random and mindless processes. What Dawkins did, however, was write a computer program to write just one sentence from Shakespeare’s Hamlet: “Methinks it is like a weasel.” Dawkins structured the computer program to begin with a single random sentence, reproduce this sentence repeatedly, but add random errors (“mutations”) in each “generation.” If the new sentence was at least somewhat closer to the target phrase “Methinks it is like a weasel,” that sentence became the new parent sentence. In this way, subsequent generations would gradually assume the form of the correct sentence. For example:

Generation 01: WDLTMNLT DTJBKWIRZREZLMQCO P
Generation 02: WDLTMNLT DTJBSWIRZREZLMQCO P
Generation 10: MDLDMNLS ITJISWHRZREZ MECS P
Generation 20: MELDINLS IT ISWPRKE Z WECSEL
Generation 30: METHINGS IT ISWLIKE B WECSEL
Generation 40: METHINKS IT IS LIKE I WEASEL
Generation 43: METHINKS IT IS LIKE A WEASEL

The Weasel program is a great example of how random change can produce order over time, BUT only under highly structured conditions, with a defined goal and a retaining of those steps toward that goal. Without these conditions, a computer program randomly selecting letters would be unlikely to produce the phrase “Methinks it is like a weasel” in the lifetime of the universe, according to Dawkins!
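
A minimal reconstruction of the weasel program makes the structure of those conditions visible. (This is my own sketch; the mutation rate and the number of offspring per generation are assumptions, not Dawkins’ published parameters.)

```python
import random

# Random mutation plus retention of the best candidate each generation:
# the two ingredients that let random micro-events produce order.
random.seed(1)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05   # chance that any one character mutates (assumed)
OFFSPRING = 100        # copies bred per generation (assumed)

def score(candidate):
    # Count characters that match the target in the right position.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    # Copy the parent, occasionally introducing a random "error".
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET and generation < 10_000:
    generation += 1
    # Breed many mutated copies; keep whichever is closest to the target.
    parent = max((mutate(parent) for _ in range(OFFSPRING)), key=score)

print("Reached the target in", generation, "generations")
```

Remove the `max(..., key=score)` step — the retention of progress — and the program degenerates into blind letter-shuffling, which would not find the sentence in the lifetime of the universe. The selection step, not the mutation step, is what produces the order.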

It is the retaining of most evolutionary advances, while allowing a small degree of randomness, that allows evolution to produce increasingly complex life forms. Reproduction has some random elements in it, but is actually remarkably precise and effective in producing offspring at least roughly similar to their parents. It is not the case that a female human is equally as likely to give birth to a dog, a pig, or a chicken as to give birth to a human. It would be very strange indeed if evolution was that random!

But there is even more to the story of evolution.

Recent research in biology has indicated that there are factors in nature that tend to push development in certain directions favorable to an organism’s flourishing. Even if you imagine evolution in nature as a huge casino, with a lot of random events, scientists have discovered that the players are strategizing: they are increasing or decreasing their level of gambling in response to environmental conditions, shaving the dice to obtain more favorable outcomes, and cooperating with each other to cheat the casino!

For example, it is now recognized among biologists that a number of microorganisms are capable to some extent of controlling their rate of mutation, increasing the rate of mutation during times of environmental challenge and stress, and suppressing the rate of mutation during times of peace and abundance. As a result of accelerated mutations, certain bacteria can acquire the ability to utilize new sources of nutrition, overcoming the threat of extinction arising from the depletion of their original food source. In other words, in response to feedback from the environment, organisms can decide to try to preserve as much of their genome as they can or experiment wildly in the hope of finding a solution to new environmental challenges.

The organism known as the octopus (a cephalopod) has a different strategy: it actively suppresses mutation in its DNA and instead recodes its RNA in response to environmental challenges. For example, octopi in the icy waters of the Antarctic recode their RNA in order to keep their nerves firing in cold water. This response is not random but directly adaptive. RNA recoding in octopi and other cephalopods is particularly prevalent in proteins responsible for the nervous system, and scientists believe this may explain why octopi are among the most intelligent creatures on Earth.

The cephalopods are somewhat unusual creatures, but there is evidence that other organisms can also adapt in a nonrandom fashion to their environment by employing molecular factors that suppress or activate the expression of certain genes — the study of these molecular factors is known as “epigenetics.” For example, every cell in a human fetus has the same DNA, but this DNA can develop into heart tissue, brain tissue, skin, liver, etc., depending on which genes are expressed and which genes are suppressed. The molecular factors responsible for gene expression are largely proteins, and these epigenetic factors can result in heritable changes in response to environmental conditions that are definitely not random.

The water flea, for example, can develop in different ways, despite identical DNA, depending on the environmental conditions experienced by the mother. If the mother flea encountered a large predator threat, her offspring develop a spiny helmet for protection; otherwise they develop normal, helmet-less heads. Studies have found that in other creatures, a particular diet can turn certain genes on or off, modifying offspring without changing their DNA. In one study, mice that exercised not only enhanced their own brain function but passed the enhancement to their children, though the effect lasted only one generation if the exercise stopped. The Mexican cave fish once had eyes, but in its dark environment, epigenetics has turned off the genes governing eye development; the underlying DNA remains unchanged. (The hypothesized reason is that organisms tend to discard traits they no longer need in order to conserve energy.)
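The distinction between a fixed DNA sequence and the heritable marks that control its expression can be sketched as a small data structure. The gene names, sequences, and "dark cave" trigger below are hypothetical placeholders echoing the cave fish example, not real loci:

```python
from dataclasses import dataclass, field

@dataclass
class Organism:
    dna: dict                                   # gene name -> sequence; copied verbatim
    silenced: set = field(default_factory=set)  # heritable epigenetic marks

    def expressed_genes(self):
        # A gene is expressed only if no silencing mark covers it
        return {gene for gene in self.dna if gene not in self.silenced}

    def reproduce(self, environment):
        # DNA and existing marks are inherited; the environment can add
        # new silencing marks without any change to the DNA itself.
        child = Organism(dna=dict(self.dna), silenced=set(self.silenced))
        if environment == "dark cave":
            child.silenced.add("eye_development")
        return child

surface_fish = Organism(dna={"eye_development": "ATGCC", "metabolism": "GGTAA"})
cave_fish = surface_fish.reproduce(environment="dark cave")

print(surface_fish.dna == cave_fish.dna)  # prints True: the DNA is identical
print(cave_fish.expressed_genes())        # but "eye_development" is switched off
```

The point of the sketch is that the descendant's genome is byte-for-byte the same as its ancestor's; only the marks layered on top of it, which the environment can modify and which are passed to offspring, differ.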

Recent studies of human beings have uncovered epigenetic adaptations that have allowed humans to flourish in environments as varied as deserts, jungles, and polar ice. The Oromo people of Ethiopia, recent settlers in the highlands of that country, have undergone epigenetic changes to their immune systems to cope with new microbiological threats. Other populations in Africa carry genetic mutations with the twin effect of protecting against malaria while causing sickle cell anemia; recently it has been found that these mutations are being silenced as the malarial threat declines. Increasingly, scientists are recognizing the large role of epigenetics in the evolution of human beings:

By encouraging the variations and adaptability of our species, epigenetic mechanisms for controlling gene expression have ensured that humanity could survive and thrive in any number of environments. Epigenetics is a significant part of the reason our species has become so adaptable, a trait that is often thought to distinguish us from what we often think of as lesser-evolved and developed animals that we inhabit this earth with. Indeed, it can be argued that epigenetics is responsible for, and provided our species with, the tools that truly made us unique in our ability to conquer any habitat and adapt to almost any climate. (Bioscience Horizons, 1 January 2017)

In fact, despite the hopes of scientists everywhere that the DNA sequencing of the human genome would provide a comprehensive biological explanation of human traits, it has been found that epigenetics may play a larger role in the complexity of human beings than the number of genes. According to one researcher, “[W]e found out that the human genome is probably not as complex and doesn’t have as many genes as plants do. So that, then, made us really question, ‘Well, if the genome has less genes in this species versus this species, and we’re more complex potentially, what’s going on here?’”

One additional nonrandom factor in evolution should be noted: the role of cooperation between organisms, which may even lead to biological mergers that create a new organism. Traditionally, evolution has been thought of primarily as random changes in organisms followed by a struggle for existence between competing organisms. It is a dark view of life. But increasingly, biologists have discovered that cooperation between organisms, known as symbiosis, also plays a role in the evolution of life, including the evolution of human beings.

Why was the role of cooperation in evolution overlooked until relatively recently? A number of biologists have argued that the society and culture of Darwin’s time played a significant role in shaping his theory — in particular, Adam Smith’s book The Wealth of Nations. In Smith’s view, the basic unit of economics was the self-interested individual on the marketplace, who bought and sold goods without any central planner overseeing his activities. Darwin essentially adopted this view and applied it to biological organisms: as businesses competed on the marketplace and flourished or died depending on how efficient they were, so too did organisms struggle against each other, with only the fittest surviving.

However, even in the late nineteenth century, a number of biologists noted cases in nature in which cooperation played a prominent role in evolution. In the 1880s, the Scottish biologist Patrick Geddes proposed that the giant green anemone contained algal cells as well as animal cells because a cooperative relationship had evolved between the two types of cells, resulting in a merger in which the algal cells were incorporated into the animal flesh of the anemone. In the latter part of the twentieth century, biologist Lynn Margulis carried this concept further. Margulis argued that the most fundamental building block of advanced organisms, the cell, was the result of a merger between more primitive bacteria billions of years ago. By merging, each bacterium lent a particular biological advantage to the other, creating a more advanced life form. This theory was regarded with much skepticism when it was proposed, but over time it became widely accepted. The traditional picture of evolution, in which new species diverge from older species and compete for survival, has had to be supplemented with a picture of cooperative behavior and mergers. As one researcher has argued, “The classic image of evolution, the tree of life, almost always exclusively shows diverging branches; however, a banyan tree, with diverging and converging branches is best.”

More recent studies have demonstrated the remarkable level of cooperation between organisms that is the basis for human life. One study from a biologist at the University of Cambridge has proposed that human beings have as many as 145 genes that have been borrowed from bacteria, other single-celled organisms, and viruses. In addition, only about half of the human body is made up of human cells — the other half consists of trillions of microbes and quadrillions of viruses that largely live in harmony with human cells. Contrary to the popular view that microbes and viruses are threats to human beings, most of these microbes and viruses are harmless or even beneficial to humans. Microbes are essential in digesting food and synthesizing vitamins, and even the human immune system is partly built and partly operated by microbes! If, as one biologist has argued, each human being is a “society of cells,” it would be equally valid to describe a human being as a “society of cells and microbes.”

Is there randomness in evolution? Certainly. But the randomness is limited in scope: it takes place within a larger order that preserves incremental gains, and it provides the experimentation and diversity organisms need to meet new challenges and new environments. Alongside this randomness are epigenetic adaptations, which turn genes on or off in response to environmental influences, and the cooperative relations of symbiosis, which can build larger and more complex organisms. These additional facts do not prove the existence of a creator-God overseeing all of creation down to the most minute detail; but they do suggest a purposive order within which an astonishing variety of life forms can emerge and grow.