Knowledge without Reason

Is it possible to gain real and valuable knowledge without using reason? Many would scoff at this notion. If an idea can’t be defended on rational grounds, it is either a personal preference that may not be held by others or it is false and irrational. Even if one acknowledges a role for intuition in human knowledge, how can one trust another person’s intuition if that person does not provide reasons for his or her beliefs?

In order to address this issue, let’s first define “reason.” The Encyclopedia Britannica defines reason as “the faculty or process of drawing logical inferences,” that is, the act of developing conclusions through logic. Britannica adds, “Reason is in opposition to sensation, perception, feeling, desire, as the faculty . . . by which fundamental truths are intuitively apprehended.” The New World Encyclopedia defines reason as “the ability to form and operate upon concepts in abstraction, in accordance with rationality and logic.” Wikipedia states: “Reason is the capacity of consciously making sense of things, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.”

Fundamental to all these definitions is the idea that knowledge must be based on explicit concepts and statements, in the form of words, symbols, or mathematics. Since human language is often ambiguous, with different definitions for the same word (I could not even find a single, widely accepted definition of “reason” in standard reference texts), many intellectuals have believed that mathematics, science, and symbolic logic are the primary means of acquiring the most certain knowledge.

However, there are types of knowledge not based on reason. These types of knowledge are difficult or impossible to express in explicit concepts and statements, but we know that they are types of knowledge because they lead to successful outcomes. In these cases, we don’t know how exactly a successful outcome was reached — that remains a black box. But we can judge that the knowledge is worthwhile by the actor’s success in achieving that outcome. There are at least six types of non-rational knowledge:


1. Perceptual knowledge

In a series of essays in the early twentieth century, the American philosopher William James drew a distinction between “percepts” and “concepts.” According to James, originally all human beings, like the lower life forms, gathered information from their environment in the form of perceptions and sensations (“percepts”). It was only later in human evolution that human beings created language and mathematics, which allowed them to form concepts. These concepts categorized and organized the findings from percepts, allowing communication between different humans about their perceptual experiences and facilitating the growth of reason. In James’s words, “Feeling must have been originally self-sufficing; and thought appears as a super-added function, adapting us to a wider environment than that of which brutes take account.” (William James, “Percept and Concept – The Import of Concepts”).

All living creatures have perceptual knowledge. They use their senses and brains, however primitive, to find shelter, find and consume food, evade or fight predators, and find a suitable mate. This perceptual knowledge is partly biologically ingrained and partly learned (habitual), but it is not the conceptual knowledge that reason uses. As James noted, “Conception is a secondary process, not indispensable to life.” (“Percept and Concept – The Abuse of Concepts”)

Over the centuries, concepts became predominant in human thinking, but James argued that both percepts and concepts were needed to fully know reality. What concepts offered humans in breadth, argued James, they lost in depth. It is one thing to know the categorical concepts “desire,” “fear,” “joy,” and “suffering”; it is quite another to actually experience desire, fear, joy, and suffering. Even relatively objective categories such as “water,” “stars,” “trees,” “fire,” and so forth are nearly impossible to adequately describe to someone who has not seen or felt these phenomena. Concepts had to be related to particular percepts in the real world, concluded James, or they were merely empty abstractions.

In fact, most of the other non-rational types of knowledge I am about to describe below appear to be types of perceptual knowledge, insofar as they involve perceptions and sensations in making judgments. But I have broken them out into separate categories for purposes of clarity and explanation.


2. Emotional knowledge

In a previous post, I discussed the reality of emotional knowledge by pointing to the studies of Professor of Neuroscience Antonio Damasio (see Descartes’ Error: Emotion, Reason, and the Human Brain). Damasio studied a number of human subjects who had lost the part of their brain responsible for emotions, whether due to an accident or a brain tumor. According to Damasio, these subjects experienced a marked decline in their competence and decision-making capability after losing their emotional capacity, even though their IQs remained above normal. They did not lose their intellectual ability, but their emotions. And that made all the difference. They lost their ability to make good decisions, to effectively manage their time, and to navigate relationships with other human beings. Their competence diminished and their productivity at work plummeted.

Why was this? According to Damasio, when these subjects lost their emotional capacity, they also lost their ability to value. And when they lost their ability to value, they lost their capacity to assign different values to the options they faced every day, leading either to a paralysis in decision-making or to repeatedly misplaced priorities, focusing on trivial tasks rather than important ones.

Now it’s true that merely having emotions does not guarantee good decisions. We all know of people who make poor decisions because they have anger management problems, they suffer from depression, or they seem to be addicted to risk-taking. The trick is to have the right balance or disposition of emotions. Consequently, a number of scientists have attempted to formulate “EQ” tests to measure a person’s emotional intelligence.


3. Common life / culture

People like to imagine that they think for themselves, and this is indeed possible — but only to a limited extent. We are all embedded in a culture, and this culture consists of knowledge and practices that stretch back hundreds or thousands of years. The average English-language speaker has a vocabulary of tens of thousands of words. So how many of those words has a typical person invented? In most cases, none – every word we use is borrowed from our cultural heritage. Likewise, every concept we employ, every number we add or subtract, every tradition we follow, every moral rule we obey is transmitted to us down through the generations. Inventing a new word that becomes widely adopted, or coming up with an idea that is both completely original and worthwhile, is a very rare event indeed.

You may argue, “This may well be true. But you know perfectly well that cultures, or the ‘common life’ of peoples, are also filled with superstition, backwardness, and barbarism. Moreover, these cultures can and do change over time. The use of reason, by the most intelligent people in that culture, has overcome many backward and barbarous practices, and has replaced superstition with science.” To which I reply, “Yes, but very few people actually make original and valuable contributions to knowledge, and those contributions are often few and confined to specialized fields. Even these creative geniuses must take for granted most of the culture they live in. No one has the time or intelligence to create a plan for an entirely new society. The common life or culture of a society is a source of wisdom that cannot be done away with entirely.”

This is essentially the insight of the eighteenth-century philosopher David Hume. According to Hume, philosophers are tempted to critique all the common knowledge of society as being unfounded in reason and to begin afresh with pure deductive logic, as did Descartes. But this can only end in total skepticism and nihilism. Rather, argues Hume, “true philosophy” must work within the common life. As Donald W. Livingstone, a former professor at Emory University, has explained:

Hume defines ‘true philosophy’ as ‘reflections on common life methodized and corrected.’ . . . The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that ‘custom is the great guide of life.’ But custom understood as ‘methodized and corrected’ by loyal and skillful participants. (“The First Conservative,” The American Conservative, August 10, 2011)


4. Tacit knowledge / Intuition

Is it possible to write a perfect manual on how to ride a bicycle, one that successfully instructs a child on how to get on a bicycle for the first time and ride it perfectly? What about a perfect cookbook, one that turns a beginner into a master chef upon reading it? Or what about reading all the books in the world about art — will that give someone what they need to create great works of art? The answer to all of these questions is, of course, “no.” One must have actual experience in these activities. Knowing how to do something is definitely a form of knowledge — but it is a form of knowledge that is difficult or impossible to transmit fully through a set of abstract rules and instructions. The knowledge is intuitive and habitual. Your brain and central nervous system make minor adjustments in response to feedback every time you practice an activity, until you master it as well as you can. When you ride a bike, you’re not consciously implementing a set of explicit rules inside your head; you’re carrying out an implicit set of habits learned in childhood. Obviously, talents vary, and practice can only take us so far. Some people have a natural disposition to be great athletes or artists or chefs. They can practice the same amount as other people and yet leap ahead of the rest.

The British philosopher Gilbert Ryle famously drew a distinction between two forms of knowledge: “knowing how” and “knowing that.” “Knowing how” is a form of tacit knowledge and precedes “knowing that,” i.e., knowing an explicit set of abstract propositions. Although we can’t fully express tacit knowledge in language, symbolic logic, or mathematics, we know it exists, because people can and will do better at certain activities by learning and practicing. But they are not simply absorbing abstract propositions — they are immersing themselves in a community, they are working alongside a mentor, and they are practicing with the guidance of the community and mentor. And this method of learning how also applies to learning how to reason in logic and mathematics. Ryle has pointed out that it is possible to teach a student everything there is to know about logical proofs — and that student may be able to fully understand others’ logical proofs. And yet when it comes to doing his or her own logical proofs, that student may completely fail. The student knows that but does not know how.

A recent article on the use of artificial intelligence in interpreting medical scans points out that it is virtually impossible for humans to interpret such scans successfully simply by applying a set of rules. The people who were best at diagnosing medical scans were not applying rules but engaging in pattern recognition, an activity that requires talent and experience and can’t be fully learned from a text. Many times when expert diagnosticians are asked how they came to a certain conclusion, they have difficulty describing their method in words — they may say a certain scan simply “looks funny.” One study described in the article concluded that pattern recognition uses a part of the brain responsible for naming things:

‘[A] process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,’ the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you’re not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality—as a pattern. The same was true for radiologists. They weren’t cogitating, recollecting, differentiating; they were seeing a commonplace object.

Oddly enough, it appears to be possible to teach computers implicit knowledge of medical scans. A computing strategy known as a “neural network” attempts to mimic the human brain by processing thousands or millions of patterns that are fed into the computer. If the computer’s answer is correct, the connection responsible for that answer is strengthened; if the answer is incorrect, that connection is weakened. Over time, the computer’s ability to arrive at the correct answer increases. But there is no set of rules, simply a correlation built up over thousands and thousands of scans. The computer remains a “black box” in its decisions.
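To make the strengthen-and-weaken idea concrete, here is a minimal sketch in Python of a perceptron, one of the simplest ancestors of such neural networks. The “scan” features and labels are invented purely for illustration; real diagnostic networks have millions of connections, but the principle of adjusting connection strengths in response to errors is the same.

```python
# A toy perceptron: connection strengths ("weights") are adjusted whenever
# the answer is wrong, over many passes through the examples. The feature
# values and labels below are invented purely for illustration.
training_data = [
    ([0.9, 0.2, 0.7], 1),  # hypothetical scan features, lesion present
    ([0.1, 0.8, 0.3], 0),  # lesion absent
    ([0.8, 0.3, 0.9], 1),
    ([0.2, 0.9, 0.1], 0),
]

weights = [0.0, 0.0, 0.0]  # one connection strength per feature
bias = 0.0
learning_rate = 0.1

def predict(features):
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

for epoch in range(100):
    for features, label in training_data:
        error = label - predict(features)  # 0 if correct, +1 or -1 if wrong
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x  # adjust each connection
        bias += learning_rate * error

print(weights, bias)  # no explicit rules anywhere -- just learned numbers
```

After training, `predict` classifies the examples correctly, yet nothing in `weights` reads as a rule a human could state — exactly the “black box” character described above.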


5. Creative knowledge

It is one thing to absorb knowledge — it is quite another to create new knowledge. One may attend school for 15 or 20 years and diligently apply the knowledge learned throughout his or her career, and yet never invent anything new, never achieve any significant new insight. Yet all knowledge was created by someone at some point in the past. How is this done?

As with emotional knowledge, creative knowledge is not necessarily an outcome of high intelligence. While creative people generally have an above-average IQ, the majority of creative people do not have a genius-level IQ (upper one percent of the population). In fact, most geniuses do not make significant creative contributions. The reason for this is that new inventions and discoveries rarely result from logical deduction; more often they arise from a “free association” of ideas that occurs when one is not mentally concentrating at all. Of note, creative people themselves cannot precisely describe how they get their ideas. The playwright Neil Simon once said, “I don’t write consciously . . . I slip into a state that is apart from reality.” According to one researcher, “[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way — seeing things that others cannot see.” Moreover, this “free association” of ideas actually occurs most effectively while a person is at rest mentally: drifting off to sleep, taking a bath or shower, or watching television.

Mathematics is probably the most precise and rigorous of disciplines, but mathematical discovery is so mysterious that mathematicians themselves have compared their insights to mysticism. The great French mathematician Henri Poincaré believed that the human mind worked subliminally on problems, and his work habit was to spend no more than two hours at a time working on mathematics. Poincaré believed that his subconscious would continue working on problems while he conducted other activities, and indeed, many of his great discoveries occurred precisely when he was away from his desk. John von Neumann, one of the best mathematicians of the twentieth century, also believed in the subliminal mind. He would sometimes go to sleep with a mathematical problem on his mind and wake up in the middle of the night with a solution. Reason may be used to confirm or disconfirm mathematical discoveries, but it is not the source of the discoveries.


6. The Moral Imagination

Where do moral rules come from? Are they handed down by God and communicated through the sacred texts — the Torah, the Bible, the Koran, etc.? Or can morals be deduced by using pure reason, or by observing nature and drawing objective conclusions, the same way that scientists come to objective conclusions about physics and chemistry and biology?

Centuries ago, a number of philosophers rejected religious dogma but came to the conclusion that it is a fallacy to suppose that reason is capable of creating and defending moral rules. These philosophers, known as the “sentimentalists,” insisted that human emotions were the root of all morals. David Hume argued that reason in itself had little power to motivate us to help others; rather, sympathy for others was the root of morality. Adam Smith argued that the basis of sympathy was the moral imagination:

As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations. . . . It is the impressions of our own senses only, not those of his, which our imaginations copy. By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them. His agonies, when they are thus brought home to ourselves, when we have thus adopted and made them our own, begin at last to affect us, and we then tremble and shudder at the thought of what he feels. (The Theory of Moral Sentiments, Section I, Chapter I)

Adam Smith recognized that it was not enough to sympathize with others; those who behaved unjustly, immorally, or criminally did not always deserve sympathy. One had to make judgments about who deserved sympathy. So human beings imagined “a judge between ourselves and those we live with,” an “impartial and well-informed spectator” by which one could make moral judgments. These two imaginations — of sympathy and of an impartial judge — are the real roots of morality for Smith.

__________________________


This brings us to our final topic: the role of non-rational forms of knowledge within reason itself.

Aristotle is regarded as the founding father of logic in the West, and his writings on the subject are still influential today. Aristotle demonstrated a variety of ways to deduce correct conclusions from certain premises. Here is one example that is not from Aristotle himself, but which is often used to illustrate his logic:

All men are mortal. (premise)

Socrates is a man. (premise)

Therefore, Socrates is mortal. (conclusion)
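For readers who like to see deduction made fully explicit, the same syllogism can be checked mechanically. Here is a sketch in the Lean theorem prover; every name in it (`Person`, `man`, `mortal`, `socrates`) is an illustrative label of my own choosing, not a library definition:

```lean
-- The classic syllogism as a machine-checked theorem.
theorem socrates_is_mortal
    {Person : Type}                            -- a domain of individuals
    (man mortal : Person → Prop)               -- two predicates on that domain
    (socrates : Person)
    (all_men_mortal : ∀ p, man p → mortal p)   -- premise: all men are mortal
    (socrates_is_man : man socrates)           -- premise: Socrates is a man
    : mortal socrates :=                       -- conclusion: Socrates is mortal
  all_men_mortal socrates socrates_is_man      -- apply premise 1 to premise 2
```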

The logic is sound, and the conclusion follows from the premises. But this simple example is hardly typical of the real-life puzzles human beings face. And there was an additional problem.

If one believed that all knowledge had to be demonstrated through logical deduction, that rule had to be applied to the premises of the argument as well. Because if the premises were wrong, the whole argument was wrong. And every argument had to begin with at least one premise. Now one could construct another argument proving the premise(s) of the first argument — but then the premises of the new argument also had to be demonstrated, and so forth, in an infinite regress.

To get out of this infinite regress, some argued that deduced conclusions could support premises in the same way as the premises supported a conclusion, a type of circular support. But Aristotle rejected this argument as incoherent. Instead, Aristotle offered an argument that to this day is regarded as difficult to interpret.

According to Aristotle, there is another cognitive state, known as “nous.” It is difficult to find an English equivalent of this word, and the Greeks themselves seemed to use different meanings, but the word “nous” has been translated as “insight,” “intuition,” or “intelligence.” According to Aristotle, nous makes it possible to know certain things immediately without going through a process of argument or logical deduction. Aristotle compares this power to perception, noting that we have the power to discern different colors with our eyesight even without being taught what colors are. It is an ingrained type of knowledge that does not need to be taught. In other words, nous is a type of non-rational knowledge — tacit, intuitive, and direct, not requiring concepts!

Materialism: There’s Nothing Solid About It!

“[I]n truth there are only atoms and the void.” – Democritus

In the ancient Greek transition from mythos to logos, stories about the world and human lives being shaped by gods and goddesses gradually came to be replaced by new explanations from philosophers. Among these philosophers were the “atomists,” including Leucippus and Democritus. Later, the Roman philosopher and poet Lucretius expounded an atomist view of the universe. The atomists were regarded as being among the first atheists and the first materialists — if they did acknowledge the existence of the gods (probably due to public pressures), they argued that the gods had no active influence on the world. Although the atomists’ understanding of the atom was primitive and far from our modern scientific understanding — they did not possess particle accelerators, after all — they were remarkably farsighted about the actual workings of nature. To this day, the symbol of the American Atheists is a depiction of the atom.

However, the ancient atomists’ conception of how the universe is constructed, with solid particles of matter combining to make complex organizational structures, has become problematic given the findings of atomic physics in the past hundred years. Increasingly, scientists have found that reality consists not of solid matter, but of organizational principles and qualities that give us the impression of solidity. And while this new view does not restore the Greek gods to prominence, it does raise questions about how we ought to understand and interpret reality.

_________________________


Leucippus and Democritus lived in the fifth century BC. While it is difficult to disentangle their views because of gaps in the historical record, both philosophers argued that all existence was ultimately based on tiny, indestructible particles (“atoms”) and empty space. While not explicitly denying the existence of the gods, the philosophy of Leucippus and Democritus made it clear that the gods had no significant role in the creation or maintenance of the universe. Rather, atoms existed eternally and moved randomly in empty space, until they collided and began to form larger units, leading to the growth of stars and planets and various life forms. The differences between types of matter, such as iron, water, and air, were due to differences in the atoms that composed this matter. Atoms could join with each other because of a variety of hooks or sockets in the atoms that allowed for attachments.

Hundreds of years later, the Roman philosopher Lucretius expanded upon atomist theory in his poem De rerum natura (On the Nature of Things). Lucretius explained that the universe consisted of an infinite number of atoms moving and combining under the influence of laws and random chance, not the decisions of gods. Lucretius also denied the existence of an afterlife, and argued that human beings should not fear death. Although Lucretius was not explicitly atheistic, his work was perceived by Christians in the Middle Ages as being essentially atheistic in outlook and was denounced for that reason.

Not all of the ancient philosophers, even those most committed to reason, accepted the atomist view of existence. It is reported that Plato hated Democritus and wished that his books be burned. Plato did accept that there were different types of matter composing the world, but posited that the particles were perfect triangles, brought together in various combinations. In addition, these triangles were guided by a cosmic intelligence, and were not colliding randomly without purpose. For Plato, the ultimate reality was the Good, and the things we saw all around us were shadows of perfect, ideal forms that were the blueprint for the less-perfect existing things.

For two thousand years after Democritus, atomism as a worldview remained a minority viewpoint — after all, religion was still an important institution in societies, and no one had yet seen or confirmed the existence of atoms. But by the nineteenth century, advances in science had accumulated to the point at which atomism became increasingly popular as a view of reality. No longer was there a need for God or gods to explain nature and existence; atoms and laws were all that were needed. The philosophy of materialism — the view that matter is the fundamental substance in nature and that all things, including mental aspects and consciousness, are results of material interactions — became increasingly prevalent. The political-economic ideology of communism, which at one time ruled one-third of the world’s population, was rooted in materialism. In fact, Karl Marx wrote his doctoral dissertation on Democritus’ philosophy of nature, and Vladimir Lenin authored a philosophical book on materialism, including chapters on physics, that was mandatory reading in the higher education system of the Soviet Union.

As physicists conducted increasingly sophisticated experiments on the smallest parts of nature, however, certain results began to challenge the view that atoms were solid particles of matter. For one thing, it was found that atoms themselves were not solid throughout but consisted of electrons orbiting around an extremely small nucleus of protons and neutrons. The nucleus of an atom is about 100,000 times smaller in diameter than the atom as a whole, even though the nucleus contains almost the entire mass of the atom. As one article has put it, “if the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium.” For that reason, some have concluded that all “solid” objects in the universe, including human beings, are actually about 99.9999999 percent empty space, because of the empty space in the atoms! Others respond that in fact it is not “empty space” in the atom, but rather a “field” or “wave function” — and here it gets confusing.
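The arithmetic behind such claims is simple volume scaling. As a rough sketch, taking the commonly quoted 1:100,000 ratio of nuclear to atomic diameter:

$$\frac{V_{\text{nucleus}}}{V_{\text{atom}}} = \left(\frac{r_{\text{nucleus}}}{r_{\text{atom}}}\right)^{3} \approx \left(\frac{1}{100{,}000}\right)^{3} = 10^{-15}$$

By volume, then, the nucleus occupies only about one part in $10^{15}$ of the atom; counted this way, the atom is even emptier than the figure quoted above (the exact ratio varies from element to element).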

In fact, subatomic particles do not have a precise location in space; they behave like a fuzzy wave until they interact with an observer, and then the wave “collapses” into a particle. The bizarreness of this activity confounded the brightest scientists in the world, and to this day, there are arguments among scientists about what is “really” going on at the subatomic level.

The currently dominant interpretation of subatomic physics, known as the “Copenhagen interpretation,” was developed by the physicists Werner Heisenberg and Niels Bohr in the 1920s. Heisenberg subsequently wrote a book, Physics and Philosophy, to explain how atomic physics changed our interpretation of reality. According to Heisenberg, the traditional scientific view of material objects and particles existing objectively, whether we observe them or not, could no longer be upheld. Rather than existing as solid objects, subatomic particles existed as “probability waves” — in Heisenberg’s words, “something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.” (Physics and Philosophy, p. 41 — page numbers are taken from the 1999 edition published by Prometheus Books). According to Heisenberg:

The probability function does . . . not describe a certain event but, at least during the process of observation, a whole ensemble of possible events. The observation itself changes the probability function discontinuously; it selects of all possible events the actual one that has taken place. . . Therefore, the transition from the ‘possible’ to the ‘actual’ takes place during the act of observation. If we want to describe what happens in an atomic event, we have to realize that the word ‘happens’ can apply only to the observation, not to the state of affairs between two observations. It applies to the physical, not the psychical act of observation, and we may say that the transition from the ‘possible’ to the ‘actual’ takes place as soon as the interaction of the object with the measuring device, and thereby with the rest of the world, has come into play. (pp. 54-55)
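In today’s textbook notation (a standard formulation, not a quotation from Heisenberg’s book), the “probability function” is the wave function $\psi$, and the Born rule connects it to observation:

$$p(x)\,dx = |\psi(x)|^{2}\,dx$$

That is, $|\psi(x)|^{2}$ gives the probability of finding the particle near position $x$ when a measurement is made. Between measurements there is only $\psi$, a distribution of possibilities; the act of observation yields one definite outcome.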

Later in his book, Heisenberg writes: “If one wants to give an accurate description of the elementary particle — and here the emphasis is on the word ‘accurate’ — the only thing that can be written down as a description is a probability function.” (p. 70) Moreover,

In the experiments about atomic events we have to do with things and facts, with phenomena that are just as real as any phenomena in daily life. But the atoms or the elementary particles themselves are not as real; they form a world of potentialities or possibilities rather than one of things or facts. (p. 186)

This sounds downright crazy to most people. The idea that the solid objects of our everyday experience are made up not of smaller solid parts but of probabilities and potentialities seems bizarre. However, Heisenberg noted that observed events at the subatomic level did seem to fit the interpretation of reality given by the Greek philosopher Aristotle over 2000 years ago. According to Aristotle, reality was a combination of matter and form, but matter was not a set of solid particles but rather potential, an indefinite possibility or power that became real only when it was combined with form to make actual existing things. (pp. 147-49) To provide some rough analogies: a supply of wood can potentially be a table or a chair or a house — but it must be combined with the right form to become actually a table or a chair or a house. Likewise, a block of marble is potentially a statue of a man or a woman or an animal, but only when a sculptor shapes the marble into that particular form does the statue become actual. In other words, actuality (reality) equals potential plus form.

According to Heisenberg, Aristotle’s concept of potential was roughly equivalent to the concept of “energy” in modern physics, and “matter” was energy combined with form.

All the elementary particles are made of the same substance, which we may call energy or universal matter; they are just different forms in which the matter can appear.

If we compare this situation with the Aristotelian concepts of matter and form, we can say that the matter of Aristotle, which is mere ‘potential,’ should be compared to our concept of energy, which gets into ‘actuality’ by means of the form, when the elementary particle is created. (p. 160)

In fact, all modern physicists agree that matter is simply a form of energy (and vice versa). In the earliest stages of the universe, matter emerged out of energy, and that is how we got atoms in the first place. There is nothing inherently “solid” about energy, but energy can be transformed into particles, and particles can be transformed back into energy. According to Heisenberg, “Energy is in fact the substance from which all elementary particles, all atoms and therefore all things are made. . . .” (p. 63)
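The quantitative statement behind this interconvertibility is Einstein’s mass–energy relation:

$$E = mc^{2}$$

An electron and a positron, for example, can annihilate into photons carrying exactly the energy corresponding to their masses, and sufficiently energetic photons can produce particle pairs in turn.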

So what exactly is energy? Oddly enough, physicists have a hard time stating exactly what energy is. Energy is usually defined as the “capacity to do work” or the “capacity to cause movement,” but these definitions remain somewhat vague, and there is no specific mechanism or form that physicists can point to in order to describe energy. Gottfried Leibniz, who developed the first formula for measuring energy, referred to energy as vis viva or “living force,” a concept which is anthropomorphic and nearly theological.  In fact, there are so many different types of energy and so many different ways to measure these types of energy that many physicists are inclined to the view that energy is not a substance but just a mathematical abstraction. According to the great American physicist Richard Feynman, “It is important to realize that in physics today, we have no knowledge of what energy ‘is.’ We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas.” The only reason physicists know that energy exists is that they have performed numerous experiments over the years and have found that however energy is measured, the amount of energy in an isolated system always remains the same — energy can only be transformed, it can neither be created nor destroyed. Energy in itself has no form, and there is no such thing as “pure energy.” Oh, and energy is relative too — you have to specify the frame of reference when measuring energy, because the position and movement of the observer matters. For example, if you move toward a photon, its energy in that frame of reference will be greater; if you move away from a photon, its energy will be less.
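The photon example can be made exact with the relativistic Doppler formula of special relativity, stated here for motion directly along the line of sight:

$$E' = E\,\sqrt{\frac{1+\beta}{1-\beta}} \;\;\text{(approaching)}, \qquad E' = E\,\sqrt{\frac{1-\beta}{1+\beta}} \;\;\text{(receding)}$$

where $\beta = v/c$. An observer moving toward a light source at half the speed of light ($\beta = 0.5$) measures each photon’s energy increased by a factor of $\sqrt{3} \approx 1.73$.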

In fact, the melding of relativity theory with quantum physics has further undermined materialism and our common-sense notions of what it is to be “real.” A 2013 article in Scientific American by Dr. Meinard Kuhlmann of Bielefeld University in Germany, “What Is Real?,” lays out some of these paradoxes of existence at the subatomic level. For example, scientists can create a vacuum in the laboratory, but when a Geiger counter is connected to the vacuum container, it will detect matter. In addition, a vacuum will contain no particles according to an observer at rest, but will contain many particles from the perspective of an accelerating observer! Kuhlmann concludes: “If the number of particles is observer-dependent, then it seems incoherent to assume that particles are basic. We can accept many features to be observer-dependent but not the fact of how many basic building blocks there are.”

So, if the smallest parts of reality are not tiny material objects, but potentialities and probabilities, which vary according to the observer, then how do we get what appears to be solid material objects, from rocks to mountains to trees to houses and cars? According to Kuhlmann, some philosophers and scientists say that we need to think about reality as consisting entirely of relations. In this view, subatomic particles have no definite position in space until they are observed because determining position in space requires a relation between an observer and observed. Position is mere potential until there is a relation. You may have heard of the old puzzle, “If a tree falls in a forest, and no one is around to hear it, does it make a sound?” The answer usually given is that sound requires a perceiver who can hear, and it makes no sense to talk about “sound” without an observer with functional ears. In the past, scientists believed that if objects were broken down into their smallest parts, we would discover the foundation of reality; but in the new view, when you break down larger objects into their smallest parts, you are gradually taking apart the relations that compose the object, until what you have left is potential. It is the relations between subatomic particles and observers that give us solidity.

Another interpretation Kuhlmann discusses is that the fundamental basis of reality is bundles of properties. In this view, reality consists not of objects or things, but of properties such as shape, mass, color, position, velocity, spin, etc. We think of things as being fundamentally real and properties as being attributes of things. But in this new view, properties are fundamentally real and “things” are what we get when properties are bundled together in certain ways. For example, we recognize a red rubber ball as being a red rubber ball because our years of experience and learning in our culture have given us the conceptual category of “red rubber ball.” An infant does not have this conceptual category, but merely sees the properties: the roundness of the shape, the color red, the elasticity of the rubber. As the infant grows up, he or she learns that this bundle of properties constitutes the “thing” known as a red rubber ball; but it is the properties that are fundamental, not the thing. So when scientists break down objects into smaller and smaller pieces in their particle accelerators, they are gradually taking apart the bundles of properties until the particles no longer even have a definite position in space!
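Programmers may find a loose analogy helpful here (only an analogy, not physics): object-oriented code treats a “thing” as primary, with properties attached to it, while the bundle view keeps only the co-occurring properties. A minimal sketch in Python, with invented values:

```python
from dataclasses import dataclass

# The familiar view: the object is primary; properties hang off of it.
@dataclass
class Ball:
    shape: str
    color: str
    material: str

thing = Ball(shape="round", color="red", material="rubber")

# The bundle view: no underlying object at all -- just a recurring
# bundle of properties to which we attach the name "red rubber ball".
bundle = frozenset({("shape", "round"), ("color", "red"), ("material", "rubber")})

print(thing)   # Ball(shape='round', color='red', material='rubber')
print(bundle)  # the same information, with nothing "underneath" it
```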

So whether we think of reality as consisting of relations or bundles of properties, there is nothing “solid” underlying everything. Reality consists of properties or qualities that emerge out of potential, and then bundle together in certain ways. Over time, some bundles or relations come apart, and new bundles or relations emerge. Finally, in the evolution of life, there is an explosion of new bundles of properties, with some bundles containing a staggering degree of organizational complexity, built incrementally over millions of years. The proper interpretation of this organizational complexity will be discussed in a subsequent post.


What Does Science Explain? Part 5 – The Ghostly Forms of Physics

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria — that is, in relation to how much it describes, it must be rather simple. — John von Neumann (“Method in the Physical Sciences,” in The Unity of Knowledge, 1955)

Now we come to the final part of our series of posts, “What Does Science Explain?” (If you have not already, you can peruse parts 1, 2, 3, and 4 here). As I mentioned in my previous posts, the rise of modern science was accompanied by a change in humanity’s view of metaphysics, that is, our theory of existence. Medieval metaphysics, largely influenced by ancient philosophers, saw human beings as the center or summit of creation; furthermore, medieval metaphysics proposed a sophisticated, multifaceted view of causation. Modern scientists, however, rejected much of medieval metaphysics as subjective and saw reality as consisting mainly of objects impacting or influencing each other in mathematical patterns.  (See The Metaphysical Foundations of Modern Science by E.A. Burtt.)

I have already critically examined certain aspects of the metaphysics of modern science in parts 3 and 4. For part 5, I wish to look more closely at the role of Forms in causation — what Aristotle called “formal causation.” This theory of causation was strongly influenced by Aristotle’s predecessor Plato and his Theory of Forms. What is Plato’s “Theory of Forms”? In brief, Plato argued that the world we see around us — including all people, trees, and animals, stars, planets and other objects — is not the true reality. The world and the things in it are imperfect and perishable realizations of perfect forms that are eternal, and that continually give birth to the things we see. That is, forms are the eternal blueprints of perfection which the material world imperfectly represents. True philosophers do not focus on the material world as it is, but on the forms that material things imperfectly reflect. In order to judge a sculpture, painting, or natural setting, a person must have an inner sense of beauty. In order to evaluate the health of a particular human body, a doctor must have an idea of what a perfectly healthy human form is. In order to evaluate a government’s system of justice, a citizen must have an idea about what perfect justice would look like. In order to critically judge leaders, citizens must have a notion of the virtues that such a leader should have, such as wisdom, honesty, and courage.  Ultimately, according to Plato, a wise human being must learn and know the perfect forms behind the imperfect things we see: we must know the Form of Beauty, the Form of Justice, the Form of Wisdom, and the ultimate form, the Form of Goodness, from which all other forms flow.

Unsurprisingly, many intelligent people in the modern world regard Plato’s Theory of Forms as dubious or even outrageous. Modern science teaches us that sure knowledge can only be obtained by observation and testing of real things, but Plato tells us that our senses are deceptive, that the true reality is hidden behind what we sense. How can we possibly confirm that the forms are real? Even Plato’s student Aristotle had problems with the Theory of Forms and argued that while the forms were real, they did not really exist until they were manifested in material things.

However, there is one important sense in which modern science retained the notion of formal causation, and that is in mathematics. In other words, most scientists have rejected Plato’s Theory of Forms in all aspects except for Plato’s view of mathematics. “Mathematical Platonism,” as it is called, is the idea that mathematical forms are objectively real and are part of the intrinsic order of the universe. However, there are also sharp disagreements on this subject, with some mathematicians and scientists arguing that mathematical forms are actually creations of the human imagination.

The chief difference between Plato and modern scientists on the study of mathematics is this: According to Plato, the objects of geometry — perfect squares, perfect circles, perfect planes — existed nowhere in the material world; we only see imperfect realizations. But the truly wise studied the perfect, eternal forms of geometry rather than their imperfect realizations. Therefore, while astronomical observations indicated that planetary bodies orbited in imperfect circles, with some irregularities and errors, Plato argued that philosophers must study the perfect forms instead of the actual orbits! (The Republic, XXVI, 524D-530C) Modern science, on the other hand, is committed to observation and study of real orbits as well as the study of perfect mathematical forms.

Is it tenable to hold the belief that Plato and Aristotle’s view of eternal forms is mostly subjective nonsense, but that they were absolutely right about mathematical forms being real? I argue that this selective borrowing from the ancient Greeks doesn’t quite work: some of the questions and difficulties with proving the reality of Platonic forms also afflict mathematical forms.

The main argument for mathematical Platonism is that mathematics is absolutely necessary for science: mathematics is the basis for the most important and valuable physical laws (which are usually in the form of equations), and everyone who accepts science must agree that the laws of nature or the laws of physics exist. However, the counterargument to this claim is that while mathematics is necessary for human beings to conduct science and understand reality, that does not mean that mathematical objects or even the laws of nature exist objectively, that is, outside of human minds.

I have discussed some of the mysterious qualities of the “laws of nature” in previous posts (here and here). It is worth pointing out that there remains a serious debate among philosophers as to whether the laws of nature are (a) descriptions of causal regularities which help us to predict or (b) causal forces in themselves. This is an important distinction that most people, including scientists, don’t notice, although the theoretical consequences are enormous. Physicist Kip Thorne writes that laws “force the Universe to behave the way it does.” But if laws have that kind of power, they must be ubiquitous (exist everywhere), eternal (exist prior to the universe), and have enormous powers although they have no detectable energy or mass — in other words, the laws of nature constitute some kind of supernatural spirit. On the other hand, if laws are summary descriptions of causation, these difficulties can be avoided — but then the issue arises: do the laws of nature or of physics really exist objectively, outside of human minds, or are they simply human-constructed statements about patterns of causation? There are good reasons to believe the latter is true.

The first thing that needs to be said is that nearly all these so-called laws of nature are actually approximations of what really happens in nature, approximations that work only under certain restrictive conditions and that fall apart outside of those conditions. Newton’s law of universal gravitation, for example, is not really universal. It becomes increasingly inaccurate under conditions of high gravity and very high velocities, and at the atomic level, gravity is completely swamped by other forces. Whether one uses Newton’s law depends on the specific conditions and the level of accuracy one requires. Kepler’s laws of planetary motion are an approximation based on the simplifying assumption of a planetary system consisting of one planet. The ideal gas law is an approximation which becomes inaccurate under conditions of low temperature and/or high pressure. The law of multiple proportions works for simple molecular compounds, but often fails for complex molecular compounds. Biologists have discovered so many exceptions to Mendel’s laws of genetics that some believe that Mendel’s laws should not even be considered laws.
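Two of the laws just mentioned, in their textbook forms (standard equations, with the usual symbols):

$$F = G\,\frac{m_{1} m_{2}}{r^{2}} \qquad \text{(Newton’s law of universal gravitation)}$$

$$PV = nRT \qquad \text{(the ideal gas law)}$$

Each is exact as written and approximate in application: the first degrades near strong gravitational fields and at relativistic speeds, where general relativity takes over; the second assumes point-like, non-interacting molecules, an idealization that fails at low temperature or high pressure.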

The fact of the matter is that even with the best laws that science has come up with, we still cannot write down an exact solution for the motions of more than two mutually gravitating bodies; predictions require numerical approximation or simplifying assumptions. Michael Scriven, a mathematician and philosopher at Claremont Graduate University, has concluded that the laws of nature or physics are actually cobbled together by scientists based on multiple criteria:

Briefly we may say that typical physical laws express a relationship between quantities or a property of systems which is the simplest useful approximation to the true physical behavior and which appears to be theoretically tractable. “Simplest” is vague in many cases, but clear for the extreme cases which provide its only use. “Useful” is a function of accuracy and range and purpose. (Michael Scriven, “The Key Property of Physical Laws — Inaccuracy,” in Current Issues in the Philosophy of Science, ed. Herbert Feigl)

The response to this argument is that it doesn’t disprove the objective existence of physical laws — it simply means that the laws that scientists come up with are approximations to real, objectively existing underlying laws. But if that is the case, why don’t scientists simply state what the true laws are? Because the “laws” would actually end up being extremely long and complex statements of causation, with so many conditions and exceptions that they would not really be considered laws.

An additional counterargument to mathematical Platonism is that while mathematics is necessary for science, it is not necessary for the universe. This is another important distinction that many people overlook. Understanding how things work often requires mathematics, but that doesn’t mean the things in themselves require mathematics. The study of geometry has given us pi and the Pythagorean theorem, but a child does not need to know these things in order to draw a circle or a right triangle. Circles and right triangles can exist without anyone, including the universe, knowing the value of pi or the Pythagorean theorem. Calculus was invented in order to understand change and acceleration; but an asteroid, a bird, or a cheetah is perfectly capable of changing direction or accelerating without needing to know calculus.

Even among mathematicians and scientists, there is a significant minority who have argued that mathematical objects are actually creations of the human imagination, that math may be used to model aspects of reality, but it does not necessarily do so. Mathematicians Philip J. Davis and Reuben Hersh argue that mathematics is the study of “true facts about imaginary objects.” Derek Abbott, a professor of engineering, writes that engineers tend to reject mathematical Platonism: “the engineer is well acquainted with the art of approximation. An engineer is trained to be aware of the frailty of each model and its limits when it breaks down. . . . An engineer . . . has no difficulty in seeing that there is no such a thing as a perfect circle anywhere in the physical universe, and thus pi is merely a useful mental construct.” (“The Reasonable Ineffectiveness of Mathematics”) Einstein himself, making a distinction between mathematical objects used as models and pure mathematics, wrote, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” Hartry Field, a philosopher at New York University, has argued that mathematics is a useful fiction that may not even be necessary for science. Field goes on to show that it is possible to reconstruct Newton’s theory of gravity without using mathematics. (There is more discussion on this subject here and here.)

So what can we conclude about the existence of forms? I have to admit that although I’m skeptical, I have no sure conclusions. It seems unlikely that forms exist outside the mind . . . but I can’t prove they don’t exist either. Forms do seem to be necessary for human reasoning — no thinking human can do without them. And forms seem to be rooted in reality: perfect circles, perfect squares, and perfect human forms can be thought of as imaginative projections of things we see, unlike Sherlock Holmes or fire-breathing dragons or flying spaghetti monsters, which are more creatively fictitious. Perhaps one could reconcile these opposing views on forms by positing that the human mind and imagination are part of the universe itself, and that through them the universe is becoming increasingly aware of itself.

Another way to think about this issue was offered by Robert Pirsig in Zen and the Art of Motorcycle Maintenance. According to Pirsig, Plato made a mistake by positing Goodness as a form. Even considered as the highest form, Goodness (or “Quality,” in Pirsig’s terminology) can’t really be thought of as a static thing floating around in space or some otherworldly realm. Forms are conceptual creations of humans who are responding to Goodness (Quality). Goodness itself is not a form, because it is not an unchanging thing — it is not static or even definable. It is “reality itself, ever changing, ultimately unknowable in any kind of fixed, rigid way.” (p. 342) Once we let go of the idea that Goodness or Quality is a form, we can realize that not only is Goodness part of reality, it is reality.

As conceptual creations, ideal forms are found in both science and religion. So why, then, does there seem to be such a sharp split between science and religion as modes of knowledge? I think it comes down to this: science creates ideal forms in order to model and predict physical phenomena, while religion creates ideal forms in order to provide guidance on how we should live.

Scientists like to see how things work — they study the parts in order to understand how the wholes work. To increase their understanding, scientists may break down certain parts into smaller parts, and those parts into even smaller parts, until they come to the most fundamental, indivisible parts. Mathematics has been extremely useful in modeling and understanding these parts of nature, so scientists create and appreciate mathematical forms.

Religion, on the other hand, tends to focus on larger wholes. The imaginative element of religion envisions perfect states of being, whether it be the Garden of Eden or the Kingdom of Heaven, as well as perfect (or near perfect) humans who serve as prophets or guides to a better life. Religion is less concerned with how things work than with how things ought to work, how things ought to be. So religion will tend to focus on subjects not covered by science, including the nature and meaning of beauty, love, and justice. There will always be debates about the appropriateness of particular forms in particular circumstances, but the use of forms in both science and religion is essential to understanding the universe and our place in it.

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes, or explanations, for why things are as they are. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, argued Burtt, there was also a loss.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity/mathematics, motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms expressed in the “laws of nature” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (The Metaphysical Foundations of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.
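To give a sense of this new precision, consider Newton’s second law of motion and his law of universal gravitation, stated here in modern algebraic notation (Newton himself argued geometrically in the Principia, and the gravitational constant G was a later refinement):

\[
F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2}
\]

Here “force” is no longer a vague notion of influence or striving. Given a body’s mass and its measured acceleration, the force acting on it can be calculated exactly, and the calculation can be checked against observation; it was this definiteness that the older vocabulary of forms and ends could not supply.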

The mistake that these early scientists made, however, was to elevate a method into a metaphysics by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even as this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Berkeley, and Hume. These critiques had little long-term impact, probably because scientists offered working predictive models and the philosophers did not. But today, even as science promises an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in the development of knowledge. The necessity of rethinking the modern scientific view of metaphysics will be the subject of my next post.

What Does Science Explain? Part 1 – What is Causation?

In previous posts, I have argued that science has been excellent at creating predictive models of natural phenomena. From the origins of the universe to the evolution of life, from chemical reactions to the building of technological devices, scientists have learned to predict causal sequences and to manipulate those sequences for the benefit (or occasionally, the detriment) of humankind. These models have been stupendous achievements of civilization, and religious texts and institutions simply cannot compete with them in predictive power.

There remains the issue, however, of whether the predictive models of science explain all that there is to explain. While many are inclined to believe that the models of science explain everything, or at least everything one needs to know, there are serious disputes even among scientists about what causation is, what counts as a valid explanation, whether predictive models need to be realistic, and whether some of the entities scientists invoke, such as the “laws of nature” and the mathematics expressed in those laws, are truly real.

These fundamental issues of causation, explanation, and reality are discussed in detail in The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt, first published in 1924 and reissued in 1954. According to Burtt, the birth and growth of modern science came with the development of a new metaphysics, that is, a new account of being and existence. Copernicus, Kepler, Galileo, and Newton all played a role in creating this new metaphysics, and it shapes how we view the world to this day.

In order to understand Burtt’s thesis, we need to back up a bit and briefly discuss the state of metaphysics before modern science — that is, medieval metaphysics. The medieval view of the world in the West was based largely on Christianity and the ancient Greek philosophers such as Aristotle, who wrote treatises on both physics and metaphysics.

Aristotle wrote that there are four types of answers to the question “why?” These answers were described by Aristotle as the “four causes,” though it has been argued that the Greek word Aristotle used, aitia, is better translated as “explanation” than “cause.” These are:

(1) Material cause

(2) Formal cause

(3) Efficient (or moving) cause

(4) Final cause

“Material cause” refers to changes that take place as a result of the material that something is made of. If a substance melts at a particular temperature, one can argue that it is the material nature of that substance that causes it to melt at that temperature. (The problem with this kind of explanation is that it is not very deep — one can then ask why a material behaves as it does.)

“Formal cause” refers to the changes that take place in matter because of the form an object is destined to have. According to Aristotle, all objects share the same underlying matter; it is the arrangement of matter into its proper form that makes it a rock, a tree, a bird, or a human being. Objects and living things eventually disintegrate and perish, but the forms are eternal, and they shape matter into new objects and living things that replace the old. The idea of formal causation is rooted in Plato’s theory of forms, though Aristotle modified Plato’s theory in a number of ways.

“Efficient cause” refers to the change that takes place when one object impacts another; one object or event is the cause, the other is the effect. A stick hitting a ball, a saw cutting wood, and hydrogen atoms interacting with oxygen atoms to create water are all examples of efficient causes.
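The last example can be put in modern chemical notation (an anachronism for Aristotle, of course, but it displays the cause-and-effect structure plainly): two hydrogen molecules combine with one oxygen molecule to yield two molecules of water.

\[
2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
\]

The interaction of the substances on the left is the efficient cause; the water on the right is the effect.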

“Final cause” refers to the goal, end, or purpose of a thing — the Greek word for goal is “telos.” An acorn grows into an oak tree because that is the goal or telos of an acorn. Likewise, a fertilized human ovum becomes a human being. In nature, birds fly, rain nourishes plants, and the moon orbits the earth, because nature has intended certain ends for certain things. The concept of a “final cause” is intimately related to the “formal cause,” in the sense that the forms tend to provide the ends that matter pursues.

Related to these four causes or explanations is Aristotle’s notion of potentiality and actuality. Before things come into existence, one can say that there is potential; when these things come into existence they are actualized. Hydrogen atoms and oxygen atoms have the potential to become water if they are joined in the right way, but until they are so joined, there is only potential water, not actual water. A block of marble has the potential to become a statue, but it is not actually a statue until a sculptor completes his or her work. A human being is potentially wise if he or she pursues knowledge, but until that pursuit of knowledge is carried out, there is only potentiality and not actuality. The forms and telos of nature are primarily responsible for the transformation of potentiality into actuality.

Two other aspects of the medieval view of metaphysics are worth noting. First, for the medievals, human beings were the center of the universe, the highest end of nature. Stars, planets, trees, animals, and chemicals were lower forms of being than humans and existed for the benefit of humans. Second, God was not merely the first cause of the universe; God was the Supreme Good, the goal or telos toward which all creation was drawn in pursuit of its perfection. According to Burtt,

When medieval philosophers thought of what we call the temporal process it was this continuous transformation of potentiality into actuality that they had in mind. . . . God was the One who eternally exists, and ever draws into movement by his perfect beauty all that is potentially the bearer of a higher existence. He is the divine harmony of all goods, conceived as now realized in ideal activity, eternally present, himself unmoved, yet the mover of all change. (Burtt, The Metaphysical Foundations of Modern Science, pp. 94-5)

The rise of modern science, according to Burtt, led to a radical change in humanity’s metaphysical views. A great deal of this change was beneficial, in the sense that it produced predictive models that successfully answered questions about natural processes that had previously been mysterious. However, as Burtt noted, the new metaphysics of science was also a straitjacket that constricted humanity’s pursuit of knowledge: certain human sensations were unjustifiably dismissed as unreliable or deceptive, and certain types of causation were swept away unnecessarily. How modern science created a new metaphysics that changed humanity’s conception of reality will be discussed in part two.