What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it.  And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel;  or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
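To make the idea of alternative geometries concrete, here is a small sketch (assuming Python with numpy; the particular triangle is chosen only for illustration) that treats great-circle arcs on a sphere as the “straight lines” and computes the angle sum of a triangle with one vertex at the pole and two on the equator. The sum comes out to 270 degrees rather than the Euclidean 180, yet the spherical geometry is perfectly consistent on its own terms.

```python
import numpy as np

def vertex_angle(a, b, c):
    """Angle at vertex a of a spherical triangle with vertices a, b, c
    (unit vectors), measured between the great-circle arcs a->b and a->c."""
    t_ab = b - np.dot(a, b) * a   # tangent direction at a, pointing toward b
    t_ac = c - np.dot(a, c) * a   # tangent direction at a, pointing toward c
    t_ab /= np.linalg.norm(t_ab)
    t_ac /= np.linalg.norm(t_ac)
    return np.degrees(np.arccos(np.clip(np.dot(t_ab, t_ac), -1.0, 1.0)))

# A triangle with one vertex at the "north pole" and two on the "equator"
north_pole = np.array([0.0, 0.0, 1.0])
equator_1  = np.array([1.0, 0.0, 0.0])
equator_2  = np.array([0.0, 1.0, 0.0])

angles = [vertex_angle(north_pole, equator_1, equator_2),
          vertex_angle(equator_1, equator_2, north_pole),
          vertex_angle(equator_2, north_pole, equator_1)]

print(sum(angles))   # 270 degrees -- not the Euclidean 180
```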

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at 1040 miles per hour, so relative to an observer on the moon, you would be moving at that speed — adjusting for the fact that the moon is also orbiting around the earth at 2288 miles per hour. But also note that the earth is orbiting the sun at 66,000 miles per hour, our solar system is orbiting the galaxy at 52,000 miles per hour, and our galaxy is moving at 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space by which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as the eighteenth century, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
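For readers who want to see where such figures come from, here is a minimal sketch of the standard length-contraction formula (the length in the moving frame equals the rest length divided by the Lorentz factor). At the speed quoted above, the Lorentz factor is roughly 1,000, so the 2-mile accelerator shrinks to the scale of feet in the electrons’ frame; the exact figure depends on precisely how close to the speed of light the electrons are taken to be travelling.

```python
import math

def contracted_length(rest_length, v_over_c):
    """Length of an object in a frame where it moves at speed v = v_over_c * c."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)   # Lorentz factor
    return rest_length / gamma

rest_length_miles = 2.0    # the accelerator's length in the Earth frame
v_over_c = 0.9999995       # the speed quoted in the lecture guidebook

length_miles = contracted_length(rest_length_miles, v_over_c)
print(f"Lorentz factor: {1.0 / math.sqrt(1.0 - v_over_c**2):.0f}")
print(f"Contracted length: {length_miles * 5280:.1f} feet")   # on the order of feet, not miles
```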

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrodinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late physicist John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to the very real problem of the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw in response to those sense impressions. Sometimes we think we see something, but we don’t. People make mistakes, they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results with the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue is required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come up with different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging that not only are there other people smarter than we are, but that the collective efforts of even less-smart people are greater than our own individual efforts. Subduing one’s ego is also required in order to prepare for the necessity of changing one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

What Does Science Explain? Part 2 – The Metaphysics of Modern Science

In my previous post, I discussed the nature of metaphysics, a theory of being and existence, in the medieval world. The metaphysics of the medieval period was strongly influenced by the ancient Greeks, particularly Aristotle, who posited four causes or explanations for why things were. In addition, Aristotle argued that existence could be understood as the result of a transition from “potentiality” to “actuality.” With the rise of modern science, argued Edwin Arthur Burtt in The Metaphysical Foundations of Modern Science, the medieval conception of existence changed. Although some of this change was beneficial, argued Burtt, there was also a loss.

The first major change that modern science brought about was the strict separation of human beings, along with human senses and desires, from the “real” universe of impersonal objects joining, separating, and colliding with each other. Rather than seeing human beings as the center or summit of creation, as the medievals did, modern scientists removed the privileged position of human beings and promoted the goal of “objectivity” in their studies, arguing that we needed to dismiss all subjective human sensations and look at objects as they were in themselves. Kepler, Galileo, and Newton made a sharp distinction between the “primary qualities” of objects and “secondary qualities,” arguing that only primary qualities were truly real, and therefore worth studying. What were the “primary qualities”? Quantity (mathematics), motion, shape, and solidity. These qualities existed within objects and were independent of human perception and sensation. The “secondary qualities” were color, taste, smell, and sound; these were subjective because they were derived from human sensations, and therefore did not provide objective facts that could advance knowledge.

The second major change that modern science brought to metaphysics was a dismissal of the medieval world’s rich and multifaceted concept of causation in favor of a focus on “efficient causation” (the impact of one object or event on another). The concept of “final causation,” that is, goal-oriented development, was neglected. In addition, the concept of “formal causation,” that is, the emergence of things out of universal forms, was reduced to mathematics; only mathematical forms expressed in the “laws of nature” were truly real, according to the new scientific worldview. Thus, all causation was reduced to mathematical “laws of nature” directing the motion and interaction of objects.

The consequences of this new worldview were tremendous in terms of altering humanity’s conception of reality and what it meant to explain reality. According to Burtt, “From now on, it is a settled assumption for modern thought in practically every field, that to explain anything is to reduce it to its elementary parts, whose relations, where temporal in character, are conceived in terms of efficient causality solely.” (Metaphysics of Modern Science, p. 134) And although the early giants of science — Kepler, Galileo, and Newton — believed in God, their conception of God was significantly different from the medieval view. Rather than seeing God as the Supreme Good, the goal or end which continually brought all things from potentiality to actuality, they saw God in terms of the “First Efficient Cause” only. That is, God brought the laws of nature into existence, and then the universe operated like a clock or machine, which might then only occasionally need rewinding or maintenance. But once this conception of God became widespread, it was not long before people questioned whether God was necessary at all to explain the universe.

Inarguably, there were great advantages to the metaphysical views of early scientists. By focusing on mathematical models and efficient causes, while pruning away many of the non-calculable qualities of natural phenomena, scientists were able to develop excellent predictive models. Descartes gave up the study of “final causes” and focused his energies on mathematics because he felt no one could discern God’s purposes, a view adopted widely by subsequent scientists. Both Galileo and Newton put great emphasis on the importance of observation and experimentation in the study of nature, which in many cases put an end to abstract philosophical speculations on natural phenomena that gave no definite conclusions. And Newton gave precise meanings to previously vague terms like “force” and “mass,” meanings that allowed measurement and calculation.

The mistake that these early scientists made, however, was to elevate a method into a metaphysics, by proclaiming that what they studied was the only true reality, with all else existing solely in the human mind. According to Burtt,

[T]he great Newton’s authority was squarely behind that view of the cosmos which saw in man a puny, irrelevant spectator . . . of the vast mathematical system whose regular motions according to mechanical principles constituted the world of nature. . . . The world that people had thought themselves living in — a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideals — was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.  (pp. 238-9)

Even at the time this new scientific metaphysics was being developed, it was critiqued on various grounds by philosophers such as Leibniz, Hume, and Berkeley. These philosophers’ critiques had little long-term impact, probably because scientists offered working predictive models and philosophers did not. But today, even as science promises an eventual “theory of everything,” the limitations of the metaphysics of modern science are causing even some scientists to rethink the whole issue of causation and the role of human sensations in developing knowledge. The necessity for rethinking the modern scientific view of metaphysics will be the subject of my next post.

The Use of Fiction and Falsehood in Science

Astrophysicist Neil deGrasse Tyson has some interesting and provocative things to say about religion in a recent interview. I tend to agree with Tyson that religions have a number of odd or even absurd beliefs that are contrary to science and reason. One statement by Tyson, however, struck me as inaccurate. According to Tyson, “[T]here are religions and belief systems, and objective truths. And if we’re going to govern a country, we need to base that governance on objective truths — not your personal belief system.” (The Daily Beast)

I have a great deal of respect for Tyson as a scientist, and Tyson clearly knows more about physics than I do. But I think his understanding of what scientific knowledge provides is naïve and unsupported by history and present day practice. The fact of the matter is that scientists also have belief systems, “mental models” of how the world works. These mental models are often excellent at making predictions, and may also be good for explanation. But the mental models of science may not be “objectively true” in representing reality.

The best mental models in science satisfy several criteria: they reliably predict natural phenomena; they cover a wide range of such phenomena (i.e., they cover much more than a handful of special cases); and they are relatively simple. Now it is not easy to create a mental model that satisfies these criteria, especially because there are tradeoffs between the different criteria. As a result, even the best scientists struggle for many years to create adequate models. But as descriptions of reality, the models, or components of the models, may be fictional or even false. Moreover, although we think that the models we have today are true, every good scientist knows that in the future our current models may be completely overturned by new models based on entirely new conceptions. Yet scientists often respect or retain older models because they are useful, even when those models are no longer considered accurate descriptions of reality!

Consider the differences between Isaac Newton’s conception of gravity and Albert Einstein’s conception of gravity. According to Newton, gravity is a force that attracts objects to each other. If you throw a ball on earth, the path of the ball eventually curves downward because of the gravitational attraction of the earth. In Newton’s view, planets orbit the sun because the force of gravity pulls planetary bodies away from the straight line paths that they would normally follow as a result of inertia: hence, planets move in closed, elliptical orbits around the sun. But according to Einstein, gravity is not a force — gravity seems like it’s a force, but it’s actually a “fictitious force.” In Einstein’s view, objects seem to attract each other because mass warps or curves spacetime, and objects tend to follow the paths made by curved spacetime. Newton and Einstein agree that inertia causes objects in motion to continue in straight lines unless they are acted on by a force; but in Einstein’s view, planets orbit the sun because they are actually already travelling straight paths, only in curved spacetime! (Yes, this makes sense — if you travel in a jet, your straightest possible path between two cities is actually curved, because the earth is round.)

Scientists agree that Einstein’s view of gravity is correct (for now). But they also continue to use Newtonian models all the time. Why? Because Newtonian models are much simpler than Einstein’s and scientists don’t want to work harder than they have to! Using Newtonian conceptions of gravity as a real force, scientists can still track the paths of objects and send satellites into orbit; Newton’s equations work perfectly fine as predictive models in most cases. It is only in extraordinary cases of very high gravity or very high speeds that scientists must abandon Newtonian models and use Einstein’s to get more accurate predictions. Otherwise scientists much prefer to assume gravity is a real force and use Newtonian models. Other fictitious forces that scientists calculate using Newton’s models are the Coriolis force and centrifugal force.
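To illustrate what using the Newtonian model looks like in practice, here is a minimal sketch (with approximate textbook values for the gravitational constant and the Earth’s mass, and a simple step-by-step integrator) that treats gravity as an ordinary attractive force and steps a low satellite forward through roughly one orbit. Whether or not gravity is “really” a force, this kind of calculation predicts the satellite’s path well enough for practical purposes.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (approximate)
M_EARTH = 5.972e24     # mass of the Earth, kg (approximate)

# Satellite in a circular low-Earth orbit: position (m) and velocity (m/s)
r = 7.0e6
x, y = r, 0.0
vx, vy = 0.0, math.sqrt(G * M_EARTH / r)   # circular-orbit speed, about 7.5 km/s

dt = 1.0               # time step, seconds
for _ in range(6000):  # simulate 100 minutes, roughly one full orbit
    dist = math.hypot(x, y)
    ax = -G * M_EARTH * x / dist**3    # Newton: acceleration points toward Earth's center
    ay = -G * M_EARTH * y / dist**3
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"altitude drift after about one orbit: {abs(math.hypot(x, y) - r) / 1000:.1f} km")
```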

Even in cases where you might expect scientists to use Einstein’s conception of curved spacetime, there is not a consistent practice. Sometimes scientists assume that spacetime is curved, sometimes they assume spacetime is flat. According to theoretical physicist Kip Thorne, “It is extremely useful, in relativity research, to have both paradigms at one’s fingertips. Some problems are solved most easily and quickly using the curved spacetime paradigm; others, using flat spacetime. Black hole problems . . . are most amenable to curved spacetime techniques; gravitational-wave problems . . . are most amenable to flat spacetime techniques.” (Black Holes and Time Warps). Whatever method provides the best results is what matters, not so much whether spacetime is really curved or not.

The question of the reality of mental models in science is particularly acute with regard to mathematical models. For many years, mathematicians have been debating whether or not the objects of mathematics are real, and they have yet to arrive at a consensus. So, if an equation accurately predicts how natural phenomena behave, is it because the equation exists “out there” someplace? Or is it because the equation is just a really good mental model? Einstein himself argued that “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” By this, Einstein meant that it was possible to create perfectly certain mathematical models in the human mind; but that the matching of these models’ predictions to natural phenomena required repeated observation and testing, and one could never be completely sure that one’s model was the final answer and therefore that it really objectively existed.

And even if mathematical models work perfectly in predicting the behavior of natural phenomena, there remains the question of whether the different components of the model really match to something in reality. As noted above, Newton’s model of gravity does a pretty good job of predicting motion — but the part of the model that describes gravity as a force is simply wrong. In mathematics, the numbers known as “imaginary numbers” are used by engineers for calculating electric current; they are used by 3D modelers; and they are used by physicists in quantum mechanics, among other applications. But that doesn’t necessarily mean that imaginary numbers exist or correspond to some real quantity — they are just useful components of an equation.
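To give a concrete sense of how imaginary numbers earn their keep in engineering, here is a minimal sketch of the standard phasor method for an alternating-current circuit; the component values below are invented purely for illustration. The intermediate quantities are complex numbers, but the physically measurable results, the current’s magnitude and phase, are ordinary real numbers.

```python
import cmath

# Series R-L-C circuit driven by a 120 V (rms), 60 Hz source.
# Component values are arbitrary, chosen only for illustration.
R = 100.0        # resistance, ohms
L = 0.20         # inductance, henries
C = 10e-6        # capacitance, farads
V = 120.0        # source voltage (rms), volts
f = 60.0         # frequency, hertz

omega = 2 * cmath.pi * f
# The impedance combines all three components into one complex number:
Z = R + 1j * omega * L + 1 / (1j * omega * C)

I = V / Z        # complex "phasor" current

print(f"current magnitude: {abs(I):.3f} A")                       # what an ammeter would read
print(f"phase shift: {cmath.phase(I) * 180 / cmath.pi:.1f} degrees")
```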

A great many scientists are quite upfront about the fact that their models may not be an accurate reflection of reality. In their view, the purpose of science is to predict the behavior of natural phenomena, and as long as science gets better and better at this, it is less important if models are proved to be a mismatch to reality. Brian Koberlein, an astrophysicist at the Rochester Institute of Technology, writes that scientific theories should be judged by the quality and quantity of their predictions, and that theories capable of making predictions can’t be proved wrong, only replaced by theories that are better at predicting. For example, he notes that the caloric theory of heat, which posited the existence of an invisible fluid within materials, was quite successful in predicting the behavior of heat in objects, and still is at present. Today, we don’t believe such a fluid exists, but we didn’t discard the theory until we came up with a new theory that could predict better. The caloric theory of heat wasn’t “proven wrong,” just replaced with something better. Koberlein also points to Newton’s conception of gravity, which is still used today because it is simpler than Einstein’s and “good enough” at predicting in most cases. Koberlein concludes that for these reasons, Einstein will “never” be wrong — we just may find a theory better at predicting.

Stephen Hawking has discussed the problem of truly knowing reality, and notes that it is perfectly possible to have different theories with entirely different conceptual frameworks that work equally well at predicting the same phenomena. In a fanciful example, Hawking notes that goldfish living in a curved bowl will see straight-line movement outside the bowl as being curved, but despite this it would still be possible for goldfish to develop good predictive theories. He notes that likewise, human beings may also have a distorted picture of reality, but we are still capable of building good predictive models. Hawking calls his philosophy “model-dependent realism”:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If there are two models that both agree with observation, like the goldfish’s model and ours, then one cannot say that one is more real than the other. One can use whichever model is more convenient in the situation under consideration. (The Grand Design, p. 46)

So if science consists of belief systems/mental models, which may contain fictions or falsehoods, how exactly does science differ from religion?

Well for one thing, science far excels religion in providing good predictive models. If you want to know how the universe began, how life evolved on earth, how to launch a satellite into orbit, or how to build a computer, religious texts offer virtually nothing that can help you with these tasks. Neil deGrasse Tyson is absolutely correct about the failure of religion in this respect. Traditional stories of the earth’s creation, as found in the Bible’s book of Genesis, were useful first attempts to understand our origins, but they have long since been eclipsed by contemporary scientific models, and there is no use denying this.

What religion does offer, and science does not, is a transcendent picture of how we ought to live our lives and an interpretation of life’s meaning according to this transcendent picture. The behavior of natural phenomena can be predicted to some extent by science, but human beings are free-willed. We can decide to love others or love ourselves above others. We can seek peace, or murder in the pursuit of power and profit. Whatever we decide to do, science can assist us in our actions, but it can’t provide guidance on what we ought to do. Religion provides that vision, and if these visions are imaginative, so are many aspects of scientific models. Einstein himself, while insisting that science was the pursuit of objective knowledge, also saw a role for religion in providing a transcendent vision:

[T]he scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capable, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. . . . Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. . . .

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man.

Now fundamentalists and atheists might both agree that rejecting the truth of sacred scripture with regard to the big bang and evolution tends to undermine the transcendent visions of religion. But the fact of the matter is that scientists never reject a mental model simply because parts of the model may be fictional or false; if the model provides useful guidance, it is still a valid part of human knowledge.

The Mythos of Mathematics

‘Modern man has his ghosts and spirits too, you know.’

‘What?’

‘Oh, the laws of physics and of logic . . . the number system . . . the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.’

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

 

It is a popular position among physicists that mathematics is what ultimately lies behind the universe. When asked for an explanation for the universe, they point to numbers and equations, and furthermore claim that these numbers and equations are the ultimate reality, existing objectively outside the human mind. This view is known as mathematical Platonism, after the Greek philosopher Plato, who argued that the ultimate reality consisted of perfect forms.

The problem we run into with mathematical Platonism is that it is subject to some of the same skepticism that people have about the existence of God, or the gods. How do we know that mathematics exists objectively? We can’t sense mathematics directly; we only know that it is a useful tool for dealing with reality. The fact that math is useful does not prove that it exists independently of human minds. (For an example of this skepticism, see this short video).

Scholars George Lakoff and Rafael Nunez, in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, offer the provocative and fascinating thesis that mathematics consists of metaphors. That is, the abstractions of mathematics are ultimately grounded in conceptual comparisons to concrete human experiences. In the view of Lakoff and Nunez, all human ideas are shaped by our bodily experiences, our senses, and how these senses react to our environment. We try to make sense of events and things by comparing them to our concrete experiences. For example, we conceptualize time as a limited resource (“time is money”); we conceptualize status or mood in terms of space (happy is “up,” while sad is “down”); we personify events and things (“inflation is eating up profits,” “we must declare war on poverty”). Metaphors are so prevalent and taken for granted, that most of the time we don’t even notice them.

Mathematical systems, according to Lakoff and Nunez, are also metaphorical creations of the human mind. Since human beings have common experiences with space, time, and quantities, our mathematical systems are similar. But we do have a choice in the metaphors we use, and that is where the creative aspect of mathematics comes in. In other words, mathematics is grounded in common experiences, but mathematical conceptual systems are creations of the imagination. According to Lakoff and Nunez, confusion and paradoxes arise when we take mathematics literally and don’t recognize the metaphors behind mathematics.

Lakoff and Nunez point to a number of common human activities that subsequently led to the creation of mathematical abstractions. The collection of objects led to the creation of the “counting numbers,” otherwise known as “natural numbers.” The use of containers led to the notion of sets and set theory. The use of measuring tools (such as the ruler or yard stick) led to the creation of the “number line.” The number line in turn was extended to a plane with x and y coordinates (the “Cartesian plane”). Finally, in order to understand motion, mathematicians conceptualized time as space, plotting points in time as if they were points in space — time is not literally the same as space, but it is easier for human beings to measure time if it is plotted on a spatial graph.

Throughout history, while the counting numbers have been widely accepted, there have been controversies over the creation of other types of numbers. One of the reasons for these controversies is the mistaken belief that numbers must be objectively real rather than metaphorical. So the number zero was initially controversial because it made no sense to speak literally of having a collection of zero objects. Negative numbers were even more controversial because it’s impossible to literally have a negative number of objects. But as the usefulness of zero and negative numbers as metaphorical expressions and in performing calculations became clear, these numbers became accepted as “real” numbers.

The metaphor of the measuring stick/number line, according to Lakoff and Nunez, has been responsible for even more controversy and confusion. The basic problem is that a line is a continuous object, not a collection of objects. If one makes an imaginative metaphorical leap and envisions the line as a collection of objects known as segments or points, that is very useful for measuring the line, but a line is not literally a collection of segments or points that correspond to objectively existing numbers.

If you draw three points on a piece of paper, the sum of the collection of points clearly corresponds to the number three, and only the number three. But if you draw a line on a piece of paper, how many numbers does it have? Where do those numbers go? The answer is up to you, depending on what you hope to measure and how much precision you want. The only requirement is that the numbers are in order and the length of the segments is consistently defined. You can put zero on the left side of the line, the right side of the line, or in the middle. You can use negative numbers or not use negative numbers. The length of the segments can be whatever you want, as long as the definitions of segment length are consistent.
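A small sketch of how free this assignment really is: the same two pencil marks on a board receive completely different coordinates depending on where we put zero, which direction counts as positive, and how long we decide a unit segment is, yet every consistent choice agrees about the physical separation of the marks. (The positions, units, and helper function below are purely illustrative.)

```python
# Two pencil marks on a board, described by where they physically sit.
# Each "number line" is just a choice of origin, unit length, and direction.

def coordinate(position_cm, origin_cm, unit_cm, direction=+1):
    """Coordinate assigned to a physical position under a chosen number line."""
    return direction * (position_cm - origin_cm) / unit_cm

mark_a, mark_b = 12.0, 30.0   # physical positions, in cm from the board's left edge

# Number line 1: zero at the left edge, one unit = 1 cm, increasing to the right
a1 = coordinate(mark_a, origin_cm=0.0, unit_cm=1.0)
b1 = coordinate(mark_b, origin_cm=0.0, unit_cm=1.0)

# Number line 2: zero in the middle of the board, one unit = 1 inch, increasing to the left
a2 = coordinate(mark_a, origin_cm=50.0, unit_cm=2.54, direction=-1)
b2 = coordinate(mark_b, origin_cm=50.0, unit_cm=2.54, direction=-1)

print(a1, b1)   # 12.0, 30.0      (coordinates on line 1)
print(a2, b2)   # ~14.96, ~7.87   (entirely different coordinates on line 2)

# Yet both lines agree on the physical separation once units are converted back:
print(abs(b1 - a1) * 1.0)      # 18.0 cm
print(abs(b2 - a2) * 2.54)     # 18.0 cm
```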

The number line is a great mental tool, but it does not objectively exist, outside of the human mind. Neglecting this fact has led to paradoxes that confounded the ancient Greeks and continue to mystify human beings to this day. The first major problem arose when the Greeks attempted to determine the ratio of the sides of a particular polygon and discovered that the ratio could not be expressed as a ratio of whole numbers, but rather as an infinite, nonrepeating decimal. For example, a right triangle with two shorter sides of length 1 would, according to the Pythagorean theorem, have a hypotenuse length equivalent to the square root of 2, which is an infinite decimal: 1.41421356. . .  This scandalized the ancient Greeks at first, because many of them had a religious devotion to the idea that whole numbers existed objectively and were the ultimate basis of reality. Nevertheless, over time the Greeks eventually accepted the so-called “irrational numbers.”

Perhaps the most famous irrational number is pi, the measure of the ratio between the circumference of a circle and its diameter: 3.14159265. . . The fact that pi is an infinite decimal fascinates people to no end, and scientists have calculated the value of pi to over 13 trillion digits. But the digital representation of pi has no objective existence — it is simply a creation of the human imagination based on the metaphor of the measuring stick / number line. There’s no reason to be surprised or amazed that the ratio of the circumference of a circle to its diameter is an infinite decimal; lines are continuous objects, and expressing lines as being composed of discrete objects known as segments is bound to lead to difficulties eventually. Moreover, pi is not necessary for the existence of circles. Even children are perfectly capable of drawing circles without knowing the value of pi. If children can draw circles without knowing the value of pi, why should the universe need to know the value of pi? Pi is simply a mental tool that human beings created to understand the ratio of certain line lengths by imposing a conceptual framework of discrete segments on a continuous quantity. Benjamin Lee Buckley, in his book The Continuity Debate, underscores this point, noting that one can use discrete tools for measuring continuity, but that truly continuous quantities are not really composed of discrete objects.

It is true that mathematicians have designated pi and other irrational numbers as “real” numbers, but the reality of the existence of pi outside the human mind is doubtful. An infinitely precise pi implies infinitely precise measurement, but there are limits to how precise one can be in reality, even assuming absolutely perfect measuring instruments. Although pi has been calculated to over 13 trillion digits, it is estimated that only 39 digits are needed to calculate the volume of the known universe to the precision of one atom! Furthermore, the Planck length is the smallest measurable length in the universe. Although quite small, the Planck length sets a definite limit on how precise pi can be in reality. At some point, depending on the size of the circle one creates, the extra digits in pi are simply meaningless.
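The arithmetic behind that claim is easy to check. Here is a minimal sketch using Python’s decimal module; the radius of the observable universe and the size of a hydrogen atom are rough, commonly cited figures, and the sketch checks the circumference version of the claim rather than the volume version.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# First 50 decimal digits of pi (a standard published value), and the same
# value truncated to 39 decimal digits.
PI_50 = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_39 = Decimal("3.141592653589793238462643383279502884197")

# Rough radius of the observable universe, in meters (a commonly cited figure)
R_UNIVERSE = Decimal("4.4e26")

# Error in the universe's circumference caused by dropping digits beyond the 39th
difference = 2 * R_UNIVERSE * (PI_50 - PI_39)
print(difference)   # about 1.5e-13 m, far smaller than a hydrogen atom (~1e-10 m across)
```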

Undoubtedly, the number line is an excellent mental tool. If we had perfect vision, perfect memory, and perfect eye-hand coordination, we wouldn’t need to divide lines into segments and count how many segments there are. But our vision is imperfect, our memories fallible, and our eye-hand coordination is imperfect. That is why we need to use versions of the number line to measure things. But we need to recognize that we are creating and imposing a conceptual tool on reality. This tool is metaphorical and, while originating in human experience, it is not reality itself.

Lakoff and Nunez point to other examples of metaphorical expressions in mathematics, such as the concept of infinity. Mathematicians discuss the infinitely large, the infinitely small, and functions in calculus that come infinitely close to some designated limit. But Lakoff and Nunez point out that the notion of actual (literal) infinity, as opposed to potential infinity, has been extremely problematic, because calculating or counting infinity is inherently an endless process. Lakoff and Nunez argue that envisioning infinity as a thing, or the result of a completed process, is inherently metaphorical, not literal. If you’ve ever heard children use the phrase “infinity plus one!” in their taunts, you can see some of the difficulties with envisioning infinity as a thing, because one can simply take the allegedly completed process and start it again. Oddly, even professional mathematicians don’t agree on the question of whether “infinity plus one” is a meaningful statement. Traditional mathematics says that infinity plus one is still infinity, but there are more recent number systems in which infinity plus one is meaningful. (For a discussion of how different systems of mathematics arrive at different answers to the same question, see this post.)

Nevertheless, many mathematicians and physicists fervently reject the idea that mathematics comes from the human mind. If mathematics is useful for explaining and predicting real world events, they argue, then mathematics must exist in objective reality, independent of human minds. But why is it important for mathematics to exist objectively? Isn’t it enough that mathematics is a useful mental tool for describing reality? Besides, if all the mathematicians in the world stopped all their current work and devoted themselves entirely to proving the objective existence of mathematical objects, I doubt that they would succeed, and mathematical knowledge would simply stop progressing.

Scientific Revolutions and Relativism

Recently, Facebook CEO Mark Zuckerberg chose Thomas Kuhn’s classic The Structure of Scientific Revolutions for his book discussion group. And although I don’t usually try to update this blog with the most recent controversy of the day, this time I can’t resist jumping on the Internet bandwagon and delving into this difficult, challenging book.

To briefly summarize, Kuhn disputes the traditional notion of science as one of cumulative growth, in which Galileo and Kepler build upon Copernicus, Newton builds upon Galileo and Kepler, and Einstein builds upon Newton. This picture of cumulative growth may be accurate for periods of “normal science,” Kuhn writes, when the community of scientists are working from the same general picture of the universe. But there are periods when the common picture of the universe (which Kuhn refers to as a “paradigm”) undergoes a revolutionary change. A radically new picture of the universe emerges in the community of scientists, old words and concepts obtain new meanings, and scientific consensus is challenged by conflict between traditionalists and adherents of the new paradigm. If the new paradigm is generally successful in solving new puzzles AND solving older puzzles that the previous paradigm solved, the community of scientists gradually moves to accept the new paradigm — though this often requires that stubborn traditionalists eventually die off.

According to Kuhn, science as a whole progressed cumulatively in the sense that science became better and better at solving puzzles and predicting things, such as the motions of the planets and stars. But the notion that scientific progress was bringing us closer and closer to the Truth, was in Kuhn’s view highly problematic. He felt there was no theory-independent way of saying what was really “out there” — conceptions of reality were inextricably linked to the human mind and its methods of perceiving, selecting, and organizing information. Rather than seeing science as evolving closer and closer to an ultimate goal, Kuhn made an analogy to biological evolution, noting that life evolves into higher forms, but there is no evidence of a final goal toward which life is heading. According to Kuhn,

I do not doubt, for example, that Newton’s mechanics improves on Aristotle’s and that Einstein’s improves on Newton’s as instruments for puzzle-solving. But I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (Structure of Scientific Revolutions, postscript, pp. 206-7.)

This claim has bothered many. In the view of Kuhn’s critics, if a theory solves more puzzles, predicts more phenomena to a greater degree of accuracy, the theory must be a more accurate picture of reality, bringing us closer and closer to the Truth. This is a “common sense” conclusion that would seem to be irrefutable. One writer in Scientific American comments on Kuhn’s appeal to “relativists,” and argues:

Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue. Because we cannot discover The Answer, we cannot find any answers. His mysticism led him to a position as absurd as that of the literary sophists who argue that all texts — from The Tempest to an ad for a new brand of vodka — are equally meaningless, or meaningful. (“What Thomas Kuhn Really Thought About Scientific ‘Truth’“)

Many others have also charged Kuhn with relativism, so it is important to take some time to examine this charge.

What people seem to have a hard time grasping is what scientific theories actually accomplish. Scientific theories or models can in fact be very good at solving puzzles or predicting outcomes without being an accurate reflection of reality — in fact, in many cases theories have to be unrealistic in order to be useful! Why? A theory must accomplish several goals, but some of these goals are incompatible, requiring a tradeoff of values. For example, the best theories generalize as much as possible, but since there are exceptions to almost every generalization, there is a tradeoff between generalizability and accuracy. As Nancy Cartwright and Ronald Giere have pointed out, the “laws of physics” have many exceptions when matched to actual phenomena; but we cherish the laws of physics because of their wide scope: they subsume millions of observations under a small number of general principles, even though specific cases usually don’t exactly match the predictions of any one law.

There is also a tradeoff between accuracy and simplicity. Complete accuracy in many cases may require dozens of complex calculations; but most of the time, complete accuracy is not required, so scientists go with the simplest possible principles and calculations. For example, when dealing with gravity, Newton’s theory is much simpler than Einstein’s, so scientists use Newton’s equations until circumstances require them to use Einstein’s equations. (For more on theoretical flexibility, see this post.)

Finally, there is a tradeoff between explanation and prediction. Many people assume that explanation and prediction are two sides of the same coin, but in fact it is not only possible to predict outcomes without having a good causal model; sometimes focusing on causation actually gets in the way of developing a good predictive model. Why? Sometimes it’s difficult to observe or measure causal variables, so you build your model using variables that are observable and measurable even if those variables are merely associated with certain outcomes and may not cause those outcomes. To choose a very simple example, a model that posits that a rooster crowing leads to the rising of the sun can be a very good predictive model while saying nothing about causation. And there are actually many examples of this in contemporary scientific practice. Scientists working for the Netflix corporation on improving the prediction of customers’ movie preferences have built a highly valuable predictive model using associations between certain data points, even though they don’t have a true causal model. (See Galit Shmueli, “To Explain or to Predict?” in Statistical Science, 2010, vol. 25, no. 3)
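A toy sketch of the rooster example (with simulated data, assuming numpy is available): crowing obviously does not cause the sun to rise, yet a model that predicts sunrise time from crow time can be quite accurate, simply because the two are strongly associated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each day the sun rises at some time (minutes after 5:00 am),
# and the rooster crows about 30 minutes earlier, with a little random noise.
sunrise = 60 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))   # seasonal variation
crow = sunrise - 30 + rng.normal(0, 2, size=365)              # crowing precedes sunrise

# Fit a simple linear predictor: sunrise_time ~ a * crow_time + b
a, b = np.polyfit(crow, sunrise, 1)
predicted = a * crow + b

print(f"typical prediction error: {np.abs(predicted - sunrise).mean():.1f} minutes")
# The model predicts sunrise well even though crowing plainly does not cause the sun to rise.
```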

Not only is there no single, correct way to make these value tradeoffs, it is often the case that one can end up with multiple, incompatible theories that deal with the same phenomena, and there is no obvious choice as to which theory is best. As Kuhn has pointed out, new theories become widely accepted among the community of scientists only when the new theory can account for anomalies in the old theory AND yet also conserve at least most of the predictions of the old theory. Even so, it is not long before even newer theories come along that also seem to account for the same phenomena equally well. Is it relativism to recognize this fact? Not really. Does the reality of multiple, incompatible theories mean that every person’s opinion is equally valid? No. There are still firm standards in science. But there can be more than one answer to a problem. The square root of 1,000,000 can be 1000 or -1000. That doesn’t mean that any answer to the square root of 1,000,000 is valid!

Physicist Stephen Hawking and philosopher Ronald Giere have made the analogy between scientific theories and maps. A map is an attempt to reduce a very large, approximately spherical, three dimensional object — the earth — to a flat surface. There is no single correct way to make a map, and all maps involve some level of inaccuracy and distortion. If you want accurate distances, the areas of the land masses will be inaccurate, and vice versa. With a small scale, you can depict large areas but lose detail. If you want to depict great detail, you will have to make a map with a larger scale. If you want to depict all geographic features, your map may become so cluttered with detail it is not useful, so you have to choose which details are important — roads, rivers, trees, buildings, elevation, agricultural areas, etc. North can be “up” on your map, but it does not have to be. In fact, it’s possible to make an infinite number of valid maps, as long as they are useful for some purpose. That does not mean that anyone can make a good map, that there are no standards. Making good maps requires knowledge and great skill.

As I noted above, physicists tend to prefer Newton’s theory of gravity rather than Einstein’s to predict the motion of celestial objects because it is simpler. There’s nothing wrong with this, but it is worth pointing out that Einstein’s picture of gravity is completely different from Newton’s. In Newton’s view, space and time are separate, absolute entities, space is flat, and gravity is a force that pulls objects away from the straight lines that the law of inertia would normally make them follow. In Einstein’s view, space and time are combined into one entity, spacetime, space and time are relative, not absolute, spacetime is curved in the presence of mass, and when objects orbit a planet it is not because the force of gravity is overcoming inertia (gravity is in fact a “fictitious force”), but because objects are obeying the law of inertia by following the curved paths of spacetime! In terms of prediction, Einstein’s view of gravity offers an incremental improvement over Newton’s, but Einstein’s picture of gravity is so radically different, Kuhn was right in seeing Einstein’s theory as a revolution. But scientists continue to use Newton’s theory, because it mostly retains the value of prediction while excelling in the value of simplicity.

Stephen Hawking explains why science is not likely to progress to a single, “correct” picture of the universe:

[O]ur brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and the elements and concepts that constitute it, the quality of reality or absolute truth. But there may be different ways in which one could model the same physical situation, with each employing different fundamental elements and concepts. If two such physical theories or models accurately predict the same events, one cannot be said to be more real than the other; rather we are free to use whichever model is more convenient. (The Grand Design, p. 7)

I don’t think this is “relativism,” but if people insist that it is relativism, it’s not Kuhn who is the guilty party. Kuhn is simply exposing what scientists do.

What Are the Laws of Nature? – Part Two

In a previous post, I discussed the mysterious status of the “laws of nature,” pointing out that these laws seem to be eternal, omnipresent, and possessing enormous power to shape the universe, although they have no mass and no energy.

There is, however, an alternative view of the laws of nature proposed by thinkers such as Ronald Giere and Nancy Cartwright, among others. In this view, it is a fallacy to suppose that the laws of nature exist as objectively real entities — rather, what we call the laws of nature are simplified models that the human mind creates to explain and predict the operations of the universe. The laws were created by human beings to organize information about the cosmos. As such, the laws are not fully accurate descriptions of how the universe actually works, but generalizations; and like nearly all generalizations, there are numerous exceptions when the laws are applied to particular circumstances. We retain the generalizations because they excel at organizing and summarizing vast amounts of information, but we should never make the mistake of assuming that the generalizations are real entities. (See Science Without Laws and How the Laws of Physics Lie.)

Consider one of the most famous laws of nature, Isaac Newton’s law of universal gravitation. According to this law, the gravitational relationship between any two bodies in the universe is determined by the size (mass) of the two bodies and their distance from each other. More specifically, any two bodies in the universe attract each other with a force that is (1) directly proportional to the product of their masses and (2) inversely proportional to the square of the distance between them.  The equation is quite simple:

F = G\frac{m_1 m_2}{r^2}

where F is the force between the two masses, G is the gravitational constant, m1 and m2 are the masses of the two bodies, and r is the distance between the centers of the two bodies.
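As a quick worked example (a sketch using rough textbook values for the Earth-Moon system; all figures are approximate), the formula can be evaluated in a few lines:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approximate)
m_earth = 5.972e24   # mass of the Earth, kg (approximate)
m_moon = 7.35e22     # mass of the Moon, kg (approximate)
r = 3.84e8           # mean Earth-Moon distance, m (approximate)

F = G * m_earth * m_moon / r**2
print(f"{F:.2e} N")  # roughly 2e20 newtons, the commonly quoted Earth-Moon attraction
```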

Newton's law was quite valuable in helping predict the motions of the planets in our solar system, but in some cases the formula did not quite match astronomical observations. The orbit of the planet Mercury in particular never fit Newton's law, no matter how much astronomers tried to fiddle with the law to get the right results. It was only when Einstein introduced his general theory of relativity that astronomers could correctly predict the motions of all the planets, including Mercury. Why did Einstein's theory work better for Mercury? Because as the planet closest to the sun, Mercury is the most affected by the sun's enormous gravitation, and Newton's law becomes less accurate under conditions of very strong gravitation.
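
For readers who want to check the numbers, the standard first-order relativistic formula for the extra advance of a planet's perihelion (a textbook result, not derived here) reproduces the famous discrepancy of roughly 43 arcseconds per century that Newton's law leaves unexplained for Mercury. A rough sketch with rounded orbital figures:

```python
from math import pi

GM_sun = 1.327e20          # gravitational parameter of the sun, m^3/s^2
c      = 2.998e8           # speed of light, m/s
a      = 5.791e10          # Mercury's semi-major axis, m
e      = 0.2056            # Mercury's orbital eccentricity
period_days = 87.97        # Mercury's orbital period

# Relativistic perihelion advance per orbit, in radians
advance_per_orbit = 6 * pi * GM_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 36525 / period_days
arcsec_per_century = advance_per_orbit * orbits_per_century * (180 / pi) * 3600
print(arcsec_per_century)   # ~43 arcseconds per century
```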

Einstein's equations for gravity are known as the "field equations," and although they are better at predicting the motions of the planets, they are extremely complex, indeed too complex for many practical situations. In fact, the physicist Stephen Hawking has noted that scientists still often use Newton's law of gravity because it is much simpler and a good enough approximation in most cases.

So what does this imply about the reality of Newton’s law of universal gravitation? Does Newton’s law float around in space or in some transcendent realm directing the motions of the planets, until the gravitation becomes too large, and then it hands off its duties to the Einstein field equations? No, of course not. Newton’s law is an approximation that works for many, but not all cases. Physicists use it because it is simple and “good enough” for most purposes. When the approximations become less and less accurate, a physicist may switch to the Einstein field equations, but this is a human value judgment, not the voice of nature making a decision to switch equations.
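
As a rough illustration of what "good enough" means in practice (a back-of-the-envelope sketch of my own, not anything Hawking computes), the size of the relativistic correction to Newtonian gravity is set roughly by the dimensionless quantity 2GM/(rc^2); when that number is tiny, physicists judge Newton's law adequate, and when it approaches 1, they switch models:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s (rounded)

def compactness(mass_kg, r_m):
    """Rough scale of relativistic corrections to Newtonian gravity at radius r."""
    return 2 * G * mass_kg / (r_m * c**2)

print(compactness(5.97e24, 6.37e6))    # Earth's surface:        ~1.4e-9 (Newton is fine)
print(compactness(1.99e30, 5.79e10))   # Sun, at Mercury's orbit: ~5e-8  (tiny, but measurable)
print(compactness(2.8e30, 1.2e4))      # neutron star surface:    ~0.3   (Newton fails badly)
```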

One other fact is worth noting: in Newton's theory, gravity is a force between two bodies. In Einstein's theory, gravity is not a real force; what we call a gravitational force is simply how we perceive the distortion of the space-time fabric caused by massive objects. Physicists today refer to gravity as a "fictitious force." So why do professors of physics continue to use Newton's law and teach this "fictitious force" law to their students? Because it is simpler to use and still a good enough approximation for most cases. Newton's law can't possibly be objectively real; if it were, Einstein would be wrong.

The school of thought known as "scientific realism" would dispute these claims, arguing that even if the laws of nature as we know them are approximations, there are still real, objective laws underneath these approximations, and as science progresses, we are getting closer and closer to knowing what these laws really are. In addition, realists argue that it would be absurd to suppose that we could make progress in technology unless we were getting better and better at knowing what the true laws are really like.

The response of Ronald Giere and Nancy Cartwright to the realists is as follows: it's a mistake to assume that if our laws are approximations, and our approximations are getting better and better, there must therefore be real laws underneath. What if nature is inherently so complex in its causal variables and sequences that there is no objectively real law underneath it all? Nancy Cartwright notes that engineers who must build and maintain technological devices never apply the "laws of nature" directly to their work without a great deal of tinkering and modification to get their mental models to match the specific details of their device. The final blueprint that engineers create is a highly specific and highly complex model that is a precise match for the device, but of very limited generalizability to the universe as a whole. In other words, there is an inherent and unavoidable tradeoff between explanatory power and accuracy. The laws of nature are valued by us because they have very high explanatory power, but specific circumstances always involve a mix of causal forces that confound the predictions of any general law. In order to understand how two bodies behave, you need to know not only gravity but also the electric charge of the two bodies, the nuclear force, any chemical forces, the temperature, the speed of the objects, and additional factors, some of which can never be calculated precisely (a small numerical sketch of this point appears below, after Cartwright's passage). According to Cartwright,

. . . theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behavior is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all. . . . God may have written just a few laws and grown tired. We do not know whether we are living in a tidy universe or an untidy one. (How the Laws of Physics Lie, p. 49)

Cartwright makes it clear that she believes in causal powers in nature — it’s just that causal powers are not the same as laws, which are simply general principles for organizing information.
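
Here is the numerical sketch promised above, a toy example of my own with invented numbers rather than anything from Cartwright: for two small charged spheres, a prediction based on the law of gravity alone is off by a factor of roughly a hundred million, because the electrostatic force between them dominates.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9          # Coulomb constant, N m^2 C^-2

m1 = m2 = 1.0        # two 1 kg spheres
q1 = q2 = 1.0e-6     # each carrying one microcoulomb of static charge
r = 1.0              # separated by 1 metre

f_gravity = G * m1 * m2 / r**2      # ~6.7e-11 N
f_coulomb = k * q1 * q2 / r**2      # ~9.0e-3 N

# The electrostatic force is roughly 10^8 times larger than the gravitational one
print(f_gravity, f_coulomb, f_coulomb / f_gravity)
```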

Some philosophers and scientists would go even further. They argue that science is able to develop and improve models for predicting phenomena, but that the underlying nature of reality cannot be grasped directly, even when our models predict extremely well. This is because there will always be aspects of nature that are not observable, and there are often multiple theories that can explain the same phenomenon. This school of thought is known as instrumentalism.

Stephen Hawking appears to be sympathetic to such a view. In a discussion of his use of "imaginary time" to model how the universe developed, Hawking stated, "a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds. So it is meaningless to ask: which is real, 'real' or 'imaginary' time? It is simply a matter of which is the more useful description." (A Brief History of Time, p. 144) In a later essay, Hawking made the case for what he calls "model-dependent realism." He argues:

it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither one can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration. . . . Each theory may have its own version of reality, but according to model-dependent realism, that diversity is acceptable, and none of the versions can be said to be more real than any other.

Hawking concludes that given these facts, it may well be impossible to develop a unified theory of everything, that we may have to settle for a diversity of models. (It’s not clear to me how Hawking’s “model-dependent realism” differs from instrumentalism, since they seem to share many aspects.)

Intuitively, we are apt to conclude that our progress in technology is proof enough that we are understanding reality better and better, getting closer and closer to the Truth. But it's actually quite possible for science to develop better and better predictive models while still retaining very serious doubts and disputes about many fundamental aspects of reality. Among physicists and cosmologists today, there is still disagreement on the following issues: Are there really such things as subatomic particles, or are these entities actually fields, or something else entirely? Is the flow of time an illusion, or is time the chief fundamental reality? Are there an infinite number of universes in a wider multiverse, with infinite versions of you, or is the multiverse theory a mistaken interpretation of uncertainty at the quantum level? Are the constants of the universe really constant, or do they sometimes change? Are mathematical objects themselves the ultimate reality, or do they exist only in the mind? A number of philosophers of science have concluded that science does indeed progress by creating more and better predictive models, but they make an analogy to evolution: life forms may be advancing and improving, but that doesn't mean they are getting closer and closer to some final goal.

In my previous post, I discussed the view that the "laws of nature" appear to exist everywhere and have the awesome power to shape the universe and direct the motions of the stars and planets, despite the fact that the laws themselves have no matter and no energy. But if the laws of nature are creations of our minds, what then? I can't prove that there are no real laws behind the mental models we create. It seems likely that there must be some such laws, but perhaps they are so complex that the best we can do is create simplified models of them. Or perhaps we must acknowledge that the precise nature of the cosmological order is mysterious, and that any attempt to understand and describe this order must use a variety of concepts, analogies, and stories created by our minds. Some of these concepts, analogies, and stories are clearly better than others, but we will never find one mental model that is a perfect fit for all aspects of reality.

The Role of Emotions in Knowledge

In a previous post, I discussed the idea of objectivity as a method of avoiding subjective error.  When people say that an issue needs to be looked at objectively, or that science is the field of knowledge best known for its objectivity, they are arguing for the need to overcome personal biases and prejudices, and to know things as they really are in themselves, independent of the human mind and perceptions.  However, I argued that truth needs to be understood as a fruitful or proper relationship between subjects and objects, and that it is impossible to know the truth by breaking this relationship.

One way of illustrating the relationship between subjects and objects is by examining the role of human emotions in knowledge.  Emotions are considered subjective, and one might argue that although emotions play a role in the form of knowledge known as the humanities (art, literature, religion), emotions are either unnecessary or an impediment to knowledge in the sciences.  However, a number of studies have demonstrated that feeling plays an important role in cognition, and that the loss of emotions in human beings leads to poor decision-making and an inability to cope effectively with the real world.  Emotionless human beings would in fact make poor scientists.

Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competency after damage took place to the emotional center of their brains.  They lost their capacity to make good decisions, to get along with other people, to manage their time, or to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests resulted in normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead? One might expect the loss of emotion to damage social relationships, but why were these people unable even to advance their own self-interest effectively? According to Damasio, without emotions these persons were unable to value, and without value, decision-making became hopelessly capricious or paralyzed, even with normal or above-normal IQs. Damasio noted that "the cold-bloodedness of Elliot's reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat." (p. 51)

It is true that emotional swings (anger, depression, anxiety, even excessive joy) can lead to very bad decisions. But the solution to this problem, according to Damasio, is to achieve the right emotional disposition, not to erase the emotions altogether. One has to find the right balance or harmony of emotions.

Damasio describes one patient who, after suffering damage to the emotional center of his brain, gained one significant advantage: while driving to his appointment on icy roads, he was able to remain calm and drive safely, while other drivers had a tendency to panic when they skidded, leading to accidents.  However, Damasio notes the downside:

I was discussing with the same patient when his next visit to the laboratory should take place.  I suggested two alternative dates, both in the coming month and just a few days apart from each other.  The patient pulled out his appointment book and began consulting the calendar.  The behavior that ensued, which was witnessed by several investigators, was remarkable.  For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates . . . Just as calmly as he had driven over the ice, and recounted that episode, he was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.  It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates.  His response was equally calm and prompt.  He simply said, ‘That’s fine.’ (pp. 193-94)

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion and therefore, hypothetically, capable of perfect objectivity? It seems likely that science would advance very slowly, at best, or perhaps not at all. After all, the same tools needed for effective decision-making in everyday life are needed for the scientific enterprise as well.

As the French mathematician and scientist Henri Poincaré noted, every time we look at the world, we encounter an immense mass of unorganized facts. We don't have the time to thoroughly examine all those facts, and we don't have the time to pursue experiments on all the hypotheses that may pop into our minds. We have to use our intuition and best judgment to select the most important facts and develop the best hypotheses (Foundations of Science, pp. 127-30, 390-91). An emotionless scientist would not only be unable to sustain the social interaction that science requires; he or she would be unable to develop a research plan, manage his or her time, or stick to the plan. An ability to perceive value is fundamental to the scientific enterprise, and emotions are needed to properly perceive and act on the right values.

Objectivity is Not Scientific

It is a common perception that objectivity is a virtue in the pursuit of knowledge, that we need to know things as they really are, independent of our mental conceptions and interpretations.  It is also a common perception that science is the form of knowledge that is the most objective, and that is why scientific knowledge makes the most progress.

Yet the principle of objectivity immediately runs into problems in the most famous scientific theory, Einstein's theory of relativity. According to relativity theory, there is no objective way to measure objects in space and time; these measures are always relative to observers, depending on the velocities at which the objects and observers are travelling, and observers often end up with different measures for the same object as a result. For example, an object travelling at very high speed will appear shorter along its direction of motion to an outside observer relative to whom it is moving, a phenomenon known as length contraction. In addition, a clock moving at high speed runs more slowly as measured by an observer relative to whom it is moving. This phenomenon is illustrated in the "twin paradox": given a pair of twins, if one sets off in a high-speed rocket while the other stays on earth, the twin on the rocket will have aged less than the twin on earth when they reunite. Finally, the sequence of two spatially separated events, say Event A and Event B, can differ according to the position and velocity of the observer. Some observers may see Event A occurring before Event B, others may see Event B occurring before Event A, and others will see the two events as simultaneous. There is no objectively true sequence of such events.
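
To see how strong these effects can be, the "Lorentz factor" (a standard textbook formula) can be computed in a few lines; the speed, rod length, and travel time below are arbitrary illustrative numbers of my own choosing:

```python
from math import sqrt

c = 3.0e8                         # speed of light, m/s (rounded)

def gamma(v):
    """Lorentz factor for an object moving at speed v relative to the observer."""
    return 1 / sqrt(1 - (v / c) ** 2)

v = 0.8 * c                       # an illustrative speed
print(gamma(v))                   # ~1.667

# A 10 m rod moving past at 0.8c measures only 6 m to the outside observer:
print(10 / gamma(v))              # 6.0

# 10 years of rocket time corresponds to about 16.7 years of earth time:
print(10 * gamma(v))              # ~16.67
```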

The theory of relativity does not say that everything is relative. The speed of light, for example, is the same for all observers, whether they are moving at high speed toward a beam of light or away from it. In fact, it was the absolute nature of the speed of light for all moving observers that led Einstein to conclude that time itself must be different for different observers. In addition, for any two events that are causally connected, the events must take place in the same sequence for all observers. In other words, if Event A causes Event B, Event A must precede Event B for all observers. So relativity theory sees some phenomena as different for different observers and others as the same for all observers.
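
One quick way to see how relativity mixes observer-dependence with observer-independence is to transform the separation between two events into another reference frame, using the standard Lorentz transformation, and check that, while the time and distance measurements change, the "spacetime interval" comes out the same. The numbers below are arbitrary illustrative choices, with the speed of light rounded:

```python
from math import sqrt

c = 3.0e8                      # speed of light, m/s (rounded)

def lorentz(dt, dx, v):
    """Transform a time/space separation (dt, dx) into a frame moving at speed v."""
    g = 1 / sqrt(1 - (v / c) ** 2)
    return g * (dt - v * dx / c**2), g * (dx - v * dt)

dt, dx = 5.0, 9.0e8            # separation between two events in the original frame
dt2, dx2 = lorentz(dt, dx, 0.6 * c)

print(dt2, dx2)                # ~4.0 s, ~0.0 m: the two observers measure different times and distances
print((c * dt)**2 - dx**2)     # 1.44e18: the spacetime interval in the first frame...
print((c * dt2)**2 - dx2**2)   # ...and the same 1.44e18 in the second frame
```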

Finally, the meaning of relativity in science is not that one person’s opinion is just as valid as anyone else’s.  Observers within the same frame of reference (say, multiple observers travelling together in the same vehicle) should agree on measurements of length and time for an outside object even if observers from other reference frames have different results.  If observers within the same vehicle don’t agree, then something is wrong — perhaps someone is misperceiving, or misinterpreting, or something else is wrong.

Nevertheless, if one accepts the theory of relativity, and this theory has been accepted by scientists for many decades now, one has to accept that there is no single objective measure of objects in space and time; such measures are observer-dependent. So why do many cling to the notion of objectivity as a principle of knowledge?

Historically, the goal of objectivity was proposed as a way to solve the problem of subjective error. Individual subjects have imperfect perceptions and interpretations. What they see and claim is fallible. The principle of objectivity tries to overcome this problem by proposing that we evaluate objects as they are in themselves, in the absence of the human mind. The problem with this principle is that we can't really step outside of our bodies and minds to evaluate an object.

So how do we overcome the problem of subjective error? The solution is not to abandon mind, but to supplement it: by communicating with other minds, checking for individual error by seeing whether others are getting different results, engaging in dialogue, and attempting to come to a consensus. Observations and experiments are repeated many times by many different people before conclusions are established. In this view, knowledge advances by using the combined power of thousands and thousands of minds, past and present. It is the only way to ameliorate the problem of a faulty relationship between subject and object and to make that relationship better.

In the end, all knowledge, including scientific knowledge, is essentially and unalterably about the relationship between subjects and objects — you cannot find true knowledge by splitting objects from subjects any more than you can split H2O into its individual atoms of hydrogen and oxygen and expect to find water in the component parts.