What Does Science Explain? Part 3 – The Mythos of Objectivity

In parts one and two of my series “What Does Science Explain?,” I contrasted the metaphysics of the medieval world with the metaphysics of modern science. The metaphysics of modern science, developed by Kepler, Galileo, Descartes, and Newton, asserted that the only true reality was mathematics and the shape, motion, and solidity of objects, all else being subjective sensations existing solely within the human mind. I pointed out that the new scientific view was valuable in developing excellent predictive models, but that scientists made a mistake in elevating a method into a metaphysics, and that the limitations of the metaphysics of modern science called for a rethinking of the modern scientific worldview. (See The Metaphysical Foundations of Modern Science by Edwin Arthur Burtt.)

Early scientists rejected the medieval worldview that saw human beings as the center and summit of creation, and this rejection was correct with regard to astronomical observations of the position and movement of the earth. But the complete rejection of medieval metaphysics with regard to the role of humanity in the universe led to a strange division between theory and practice in science that endures to this day. The value and prestige of science rests in good part on its technological achievements in improving human life. But technology has a two-sided nature, a destructive side as well as a creative side. Aspects of this destructive side include automatic weaponry, missiles, conventional explosives, nuclear weapons, biological weapons, dangerous methods of climate engineering, perhaps even a threat from artificial intelligence. Even granting the necessity of the tools of violence for deterrence and self-defense, there remains the question of whether this destructive technology is going too far and slipping out of our control. So far the benefits of good technology have outweighed the hazards of destructive technology, but what research guidance is offered to scientists when human beings are removed from their high place in the universe and human values are separated from the “real” world of impersonal objects?

Consider the following question: Why do medical scientists focus their research on the treatment and cure of illness in humans rather than the treatment and cure of illness in cockroaches or lizards? This may seem like a silly question, but there’s no purely objective, scientific reason to prefer one course of research over another; the metaphysics of modern science has already disregarded the medieval view that humans have a privileged status in the universe. One could respond by arguing that human beings have a common self-interest in advancing human health through medical research, and this self-interest is enough. But what is the scientific justification for the pursuit of self-interest, which is not objective anyway? Without a recognition of the superior value of human life, medical science has no research guidance.

Or consider this: right now, astronomers are developing and employing advanced technologies to detect other worlds in the galaxy that may have life. The question of life on other planets has long interested astronomers, but it was impossible with older technologies to adequately search for life. It would be safe to say that the discovery of life on another planet would be a landmark development in science, and the discovery of intelligent life on another planet would be an astonishing development. The first scientist who discovered a world with intelligent life would surely win awards and fame. And yet, we already have intelligent life on earth and the metaphysics of modern science devalues it. In practice, of course, most scientists do value human life; the point is, the metaphysics behind science doesn’t, leaving scientists at a loss for providing an intellectual justification for a research program that protects and advances human life.

A second limitation of modern science’s metaphysics, closely related to the first, is its disregard of certain human sensations in acquiring knowledge. Early scientists promoted the view that only the “primary qualities” of mathematics, shape, size, and motion were real, while the “secondary qualities” of color, taste, smell, and sound existed only in the mind. This distinction between primary and secondary qualities was criticized at the time by philosophers such as George Berkeley, a bishop of the Anglican Church. Berkeley argued that the distinction between primary and secondary qualities was false and that even size, shape, and motion were relative to the perceptions and judgment of observers. Berkeley also opposed Isaac Newton’s theory that space and time were absolute entities, arguing instead that these were ideas rooted in human sensations. But Berkeley was disregarded by scientists, largely because Newton offered predictive models of great value.

Three hundred years later, Isaac Newton’s models retain their great value and are still widely used — but it is worth noting that Berkeley’s metaphysics has actually proved superior in many respects to Newton’s metaphysics.

Consider the nature of mathematics. For many centuries mathematicians believed that mathematical objects were objectively real and certain and that Euclidean geometry was the one true geometry. However, the discovery of non-Euclidean geometries in the nineteenth century shook this assumption, and mathematicians had to reconcile themselves to the fact that it was possible to create multiple geometries of equal validity. There were differences between the geometries in terms of their simplicity and their ability to solve particular problems, but no one geometry was more “real” than the others.

If you think about it, this should not be surprising. The basic objects of geometry — points, lines, and planes — aren’t floating around in space waiting for you to take note of them. They are concepts, creations of the human brain. We may see particular objects that resemble points, lines, and planes, but space itself has no visible content; we have to add content to it.  And we have a choice in what content to use. It is possible to create a geometry in which all lines are straight or all lines are curved; in which some lines are parallel or no lines are parallel;  or in which lines are parallel over a finite distance but eventually meet at some infinitely great distance. It is also possible to create a geometry with axioms that assume no lines, only points; or a geometry that assumes “regions” rather than points. So the notion that mathematics is a “primary quality” that exists within objects independent of human minds is a myth. (For more on the imaginary qualities of mathematics, see my previous posts here and here.)
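To make the point concrete, here is a standard textbook illustration (not from Burtt or my earlier posts, just a convenient example) of how the geometries differ while remaining equally consistent: the angle sum of a triangle depends on which geometry you choose to work in.

```latex
% Angle sum of a triangle with angles \alpha, \beta, \gamma and area A,
% in a space of constant curvature K (a consequence of the Gauss-Bonnet theorem):
\alpha + \beta + \gamma = \pi + K A
% Euclidean geometry:  K = 0, so the angles always sum to exactly 180 degrees.
% Spherical geometry:  K > 0, so the angles sum to more than 180 degrees.
% Hyperbolic geometry: K < 0, so the angles sum to less than 180 degrees.
```

On a sphere, for example, a triangle with one vertex at the north pole and two vertices on the equator a quarter of the way around the globe apart has three right angles, for an angle sum of 270 degrees: perfectly consistent within its own geometry, just not Euclidean.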

But aside from the discovery of multiple mathematical systems, what has really killed the artificial distinction between “primary qualities,” allegedly objective, and “secondary qualities,” allegedly subjective, is modern science itself, particularly in the findings of relativity theory and quantum mechanics.

According to relativity theory, there is no single, objectively real size, shape, or motion of objects — these qualities are all relative to an observer in a particular reference frame (say, at the same location on earth, in the same vehicle, or in the same rocket ship). Contrary to some excessive and simplistic views, relativity theory does NOT mean that any and all opinions are equally valid. In fact, all observers within the same reference frame should be seeing the same thing and their measurements should match. But observers in different reference frames may have radically different measurements of the size, shape, and motion of an object, and there is no one single reference frame that is privileged — they are all equally valid.

Consider the question of motion. How fast are you moving right now? Relative to your computer or chair, you are probably still. But the earth is rotating at about 1,040 miles per hour at the equator, so relative to an observer on the moon you would be moving at roughly that speed — adjusting for the fact that the moon is itself orbiting the earth at about 2,288 miles per hour. Note also that the earth is orbiting the sun at about 66,000 miles per hour, our solar system is orbiting the center of the galaxy at roughly 500,000 miles per hour, and our galaxy itself is moving at about 1,200,000 miles per hour; so from the standpoint of an observer in another galaxy you are moving at a fantastically fast speed in a series of crazy looping motions. Isaac Newton argued that there was an absolute position in space against which your true, objective speed could be measured. But Einstein dismissed that view, and the scientific consensus today is that Einstein was right — the answer to the question of how fast you are moving is relative to the location and speed of the observer.

The relativity of motion was anticipated by the aforementioned George Berkeley as early as 1710, in his Treatise Concerning the Principles of Human Knowledge (paragraphs 112-16). Berkeley’s work was later read by the physicist Ernst Mach, who in turn influenced Einstein.

Relativity theory also tells us that there is no absolute size and shape, that these also vary according to the frame of reference of an observer in relation to what is observed. An object moving at very fast speeds relative to an observer will be shortened in length, which also affects its shape. (See the examples here and here.) What is the “real” size and shape of the object? There is none — you have to specify the reference frame in order to get an answer. Professor Richard Wolfson, a physicist at Middlebury College who has a great lecture series on relativity theory, explains what happens at very fast speeds:

An example in which length contraction is important is the Stanford Linear Accelerator, which is 2 miles long as measured on Earth, but only about 3 feet long to the electrons moving down the accelerator at 0.9999995c [nearly the speed of light]. . . . [Is] the length of the Stanford Linear Accelerator ‘really’ 2 miles? No! To claim so is to give special status to one frame of reference, and that is precisely what relativity precludes. (Course Guidebook to Einstein’s Relativity and the Quantum Revolution, Lecture 10.)

In fact, from the perspective of a light particle (a photon), there is infinite length contraction — there is no distance and the entire universe looks like a point!
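For readers who want to see the arithmetic behind examples like Wolfson’s, here is a minimal sketch of the standard length-contraction formula of special relativity, L = L0·sqrt(1 − (v/c)²); the rest length and speeds below are arbitrary choices for illustration, not the accelerator’s actual figures.

```python
import math

def contracted_length(rest_length, v_over_c):
    """Length of an object as measured by an observer it passes at speed v,
    per special relativity: L = L0 * sqrt(1 - (v/c)^2)."""
    return rest_length * math.sqrt(1.0 - v_over_c ** 2)

rest_length_m = 100.0  # illustrative rest length in meters
for beta in (0.5, 0.9, 0.99, 0.9999995):
    print(f"v = {beta}c  ->  measured length = {contracted_length(rest_length_m, beta):.4f} m")
# As v approaches c, the measured length approaches zero; in the limiting case of
# a photon the contraction is total, which is the sense in which the entire
# universe "looks like a point."
```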

The final nail in the coffin of the metaphysics of modern science is surely the weird world of quantum physics. According to quantum physics, particles at the subatomic level do not occupy only one position at a particular moment of time but can exist in multiple positions at the same time — only when the subatomic particles are observed do the various possibilities “collapse” into a single outcome. This oddity led to the paradoxical thought experiment known as “Schrödinger’s Cat” (video here). The importance of the “observer effect” to modern physics is so great that some physicists, such as the late John Wheeler, believed that human observation actually plays a role in shaping the very reality of the universe! Stephen Hawking holds a similar view, arguing that our observation “collapses” multiple possibilities into a single history of the universe: “We create history by our observation, rather than history creating us.” (See The Grand Design, pp. 82-83, 139-41.) There are serious disputes among scientists about whether uncertainties at the subatomic level really justify the multiverse theories of Wheeler and Hawking, but that is another story.
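As a loose illustration of the recipe just described (a toy model, not a claim about what is “really” happening physically), the standard quantum prescription can be sketched in a few lines: a particle’s state assigns an amplitude to each possible outcome, and an observation yields a single definite outcome with probability equal to the squared magnitude of its amplitude (the Born rule).

```python
import random

# A toy "qubit": complex amplitudes for the two possible outcomes 0 and 1.
amplitudes = {0: complex(0.6, 0.0), 1: complex(0.0, 0.8)}  # probabilities 0.36 and 0.64, which sum to 1

def observe(amps):
    """Simulate a measurement: the superposition 'collapses' to one outcome,
    chosen with probability |amplitude|**2 (the Born rule)."""
    outcomes = list(amps)
    weights = [abs(a) ** 2 for a in amps.values()]
    return random.choices(outcomes, weights=weights, k=1)[0]

# Before observation the state carries both possibilities; each observation of an
# identically prepared particle nevertheless yields one definite result.
results = [observe(amplitudes) for _ in range(10_000)]
print("fraction of outcome 1:", results.count(1) / len(results))  # roughly 0.64
```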

Nevertheless, despite the obsolescence of the metaphysical premises of modern science, when scientists talk about the methods of science, they still distinguish between the reality of objects and the unreality of what exists in the mind, and emphasize the importance of being objective at all times. Why is that? Why do scientists still use a metaphysics developed centuries ago by Kepler, Galileo, and Newton? I think this practice persists largely because the growth of knowledge since these early thinkers has led to overspecialization — if one is interested in science, one pursues a degree in chemistry, biology, or physics; if one is interested in metaphysics, one pursues a degree in philosophy. Scientists generally aren’t interested in or can’t understand what philosophers have to say, and philosophers have the same view of scientists. So science carries on with a metaphysics that is hundreds of years old and obsolete.

It’s true that the idea of objectivity was developed in response to a very real problem: the uncertainty of human sense impressions and the fallibility of the conclusions our minds draw from those impressions. Sometimes we think we see something, but we don’t. People make mistakes; they may see mirages; in extreme cases, they may hallucinate. Or we see the same thing but have different interpretations. Early scientists tried to solve this problem by separating human senses and the human mind from the “real” world of objects. But this view was philosophically dubious to begin with and has been refuted by science itself. So how do we resolve the problem of mistaken and differing perceptions and interpretations?

Well, we supplement our limited senses and minds with the senses and minds of other human beings. We gather together, we learn what others have perceived and concluded, we engage in dialogue and debate, we conduct repeated observations and check our results against the results of others. If we come to an agreement, then we have a tentative conclusion; if we don’t agree, more observation, testing, and dialogue are required to develop a picture that resolves the competing claims. In some cases we may simply end up with an explanation that accounts for why we come to different conclusions — perhaps we are in different locations, moving at different speeds, or there is something about our sensory apparatus that causes us to sense things differently. (There is an extensive literature in science about why people see colors differently due to the nature of the eye and brain.)

Central to the whole process of science is a common effort — but there is also the necessity of subduing one’s ego, acknowledging not only that there are other people smarter than we are, but also that the collective efforts of even less brilliant people are greater than our own individual efforts. Subduing one’s ego is also required in order to change one’s mind in response to new evidence and arguments. Ultimately, the search for knowledge is a social and moral enterprise. But we are not going to succeed in that endeavor by positing a reality separate from human beings and composed only of objects. (Next: Part 4)

The Role of Emotions in Knowledge

In a previous post, I discussed the idea of objectivity as a method of avoiding subjective error.  When people say that an issue needs to be looked at objectively, or that science is the field of knowledge best known for its objectivity, they are arguing for the need to overcome personal biases and prejudices, and to know things as they really are in themselves, independent of the human mind and perceptions.  However, I argued that truth needs to be understood as a fruitful or proper relationship between subjects and objects, and that it is impossible to know the truth by breaking this relationship.

One way of illustrating the relationship between subjects and objects is by examining the role of human emotions in knowledge.  Emotions are considered subjective, and one might argue that although emotions play a role in the form of knowledge known as the humanities (art, literature, religion), emotions are either unnecessary or an impediment to knowledge in the sciences.  However, a number of studies have demonstrated that feeling plays an important role in cognition, and that the loss of emotions in human beings leads to poor decision-making and an inability to cope effectively with the real world.  Emotionless human beings would in fact make poor scientists.

Professor of Neuroscience Antonio Damasio, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, describes several cases of human beings who lost the part of their brain responsible for emotions, either because of an accident or a brain tumor.  These persons, some of whom were previously known as shrewd and smart businessmen, experienced a serious decline in their competence after the emotional centers of their brains were damaged.  They lost the capacity to make good decisions, to get along with other people, to manage their time, and to plan for the future.  In every other respect, these persons retained their cognitive abilities — their IQs remained above normal and their personality tests produced normal scores.  The only thing missing was their capacity to have emotions.  Yet this made a huge difference.  Damasio writes of one subject, “Elliot”:

Consider the beginning of his day: He needed prompting to get started in the morning and prepare to go to work.  Once at work he was unable to manage his time properly; he could not be trusted with a schedule.  When the job called for interrupting an activity and turning to another, he might persist nonetheless, seemingly losing sight of his main goal.  Or he might interrupt the activity he had engaged, to turn to something he found more captivating at that particular moment.  Imagine a task involving reading and classifying documents of a given client.  Elliot would read and fully understand the significance of the material, and he certainly knew how to sort out the documents according to the similarity or disparity of their content.  The problem was that he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so.  Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another?   The flow of work was stopped. (p. 36)

Why did the loss of emotion, which might be expected to improve decision-making by making these persons coldly objective, result in poor decision-making instead?  One might well expect the loss of emotion to damage social relationships, which depend on empathy and warmth; but why were these people unable even to advance their own self-interest effectively?  According to Damasio, without emotions these persons were unable to assign value, and without value, decision-making became hopelessly capricious or paralyzed, even with normal or above-normal IQs.  Damasio noted that “the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” (p. 51)

It is true that emotional swings — anger, depression, anxiety, even excessive joy — can lead to very bad decisions.  But the solution to this problem, according to Damasio, is to achieve the right emotional disposition, not to erase the emotions altogether.  One has to find the right balance or harmony of emotions.

Damasio describes one patient who, after suffering damage to the emotional center of his brain, gained one significant advantage: while driving to his appointment on icy roads, he was able to remain calm and drive safely, whereas other drivers tended to panic when they skidded, leading to accidents.  However, Damasio notes the downside:

I was discussing with the same patient when his next visit to the laboratory should take place.  I suggested two alternative dates, both in the coming month and just a few days apart from each other.  The patient pulled out his appointment book and began consulting the calendar.  The behavior that ensued, which was witnessed by several investigators, was remarkable.  For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates . . . Just as calmly as he had driven over the ice, and recounted that episode, he was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.  It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates.  His response was equally calm and prompt.  He simply said, ‘That’s fine.’ (pp. 193-94)

So how would it affect scientific progress if all scientists were like the subjects Damasio studied, free of emotion and therefore hypothetically capable of perfect objectivity?  Well, it seems likely that science would advance very slowly, at best, or perhaps not at all.  After all, the same tools needed for effective decision-making in everyday life are needed for the scientific enterprise as well.

As the French mathematician and scientist Henri Poincaré noted, every time we look at the world we encounter an immense mass of unorganized facts.  We don’t have the time to examine all those facts thoroughly, and we don’t have the time to pursue experiments on all the hypotheses that may pop into our minds.  We have to use our intuition and best judgment to select the most important facts and develop the best hypotheses (Foundations of Science, pp. 127-30, 390-91).  An emotionless scientist would not only be unable to sustain the social interaction that science requires; he or she would also be unable to develop a research plan, manage his or her time, or stick to that plan.  An ability to perceive value is fundamental to the scientific enterprise, and emotions are needed to properly perceive and act on the right values.

The Role of Imagination in Science, Part 3

In previous posts (here and here), I argued that mathematics was a product of the human imagination, and that the test of mathematical creations was not how real they were but how useful or valuable they were.

Recently, Russian mathematician Edward Frenkel, in an interview in the Economist magazine, argued the contrary case.  According to Frenkel,

[M]athematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness.  We mathematicians discover them and are able to connect to this hidden reality through our consciousness.  If Leo Tolstoy had not lived we would never have known Anna Karenina.  There is no reason to believe that another author would have written that same novel.  However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem.

Dr. Frenkel goes on to note that mathematical concepts don’t always correspond to physical reality — Euclidean geometry represents an idealized three-dimensional flat space, whereas our actual universe has curved space.  Nevertheless, he argues, mathematical concepts must have an objective reality because “these concepts transcend any specific individual.”

One problem with this argument is the implicit assumption that the human imagination is wholly individualistic and arbitrary, and that if multiple people come up with the same idea, this must demonstrate that the idea exists objectively outside the human mind.  I don’t think this assumption is valid.  It’s perfectly possible for the same idea to be invented by multiple people independently.  Surely if Thomas Edison had never lived, someone else would have invented the light bulb.  Does that mean that the light bulb is not a true creation of the imagination, that it was not invented but always existed “objectively” before Edison came along and “discovered” it?  I don’t think so.  Likewise with modern modes of ground transportation, air transportation, manufacturing technology, and so on.  They’re all apt to be imagined and invented by multiple people working independently; it’s just that patent law recognizes only the first person to file.

It’s true that in other fields of human knowledge, such as literature, one is more likely to find creations that are truly unique.  Yes, Anna Karenina would probably never have been written by anyone else in Tolstoy’s absence.  However, even in literature there are themes that are universal; character names and specific plot developments may vary, but many stories are variations on the same theme.  Consider the following story: two characters from different social groups meet and fall in love; the two social groups are antagonistic toward each other and would disapprove of the love; the two lovers meet secretly, but are eventually discovered; one or both lovers die tragically.  Is this not the basic plot of multiple stories, plays, operas, and musicals going back two thousand years?

Dr. Frenkel does admit that not all mathematical concepts correspond to physical reality.  But if there is not a correspondence to something in physical reality, what does it mean to say that a mathematical concept exists objectively?  How do we prove something exists objectively if it is not in physical reality?

If one looks at the history of mathematics, there is an intriguing pattern in which the earliest mathematical symbols do indeed seem to point to or correspond to objects in physical reality; but as time went on and mathematics advanced, mathematical concepts became more and more creative and distant from physical reality.  These later mathematical concepts were controversial among mathematicians at first, but later became widely adopted, not because someone proved they existed, but because the concepts seemed to be useful in solving problems that could not be solved any other way.

The earliest mathematical concepts were the “natural numbers,” the numbers we use for counting (1, 2, 3 . . .).  Simple operations were derived from these natural numbers.  If I have two apples and add three apples, I end up with five apples.  However, the number zero was initially controversial — how can nothing be represented by something?  The ancient Greeks and Romans, for all of their impressive accomplishments, did not use zero, and the number zero was not adopted in Europe until the Middle Ages.

Negative numbers were also controversial at first.  How can one have “negative two apples” or a negative quantity of anything?  However, it became clear that negative numbers were indeed useful conceptually.  If I have zero apples and borrow two apples from a neighbor, according to my mental accounting book, I do indeed have “negative two apples,” because I owe two apples to my neighbor.  It is an accounting fiction, but it is a useful and valuable fiction.  Negative numbers were invented in ancient China and India, but were rejected by Western mathematicians and were not widely accepted in the West until the eighteenth century.

The set of numbers known explicitly as “imaginary numbers” was even more controversial, since it involves a quantity which, when squared, results in a negative number.  Since no ordinary (real) number behaves this way, imaginary numbers were initially derided.  However, imaginary numbers proved to be such a useful conceptual tool for solving certain problems that they gradually became accepted.  Imaginary numbers have been used to analyze alternating-current circuits, to formulate quantum physics, and to represent rotations, including rotations in three dimensions.
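A small example of that usefulness, using nothing beyond Python’s built-in complex numbers: multiplying by e^(iθ) rotates a point in the plane, which is one reason engineers and physicists reach for “imaginary” quantities when dealing with rotations and oscillations. (Rotations in three dimensions use the quaternions, an extension of the complex numbers; this sketch sticks to the plane.)

```python
import cmath

def rotate(point, angle_radians):
    """Rotate a point in the plane about the origin by multiplying by e^(i*theta)."""
    return point * cmath.exp(1j * angle_radians)

p = complex(1, 0)               # the point (1, 0)
print(rotate(p, cmath.pi / 2))  # approximately 1j, i.e. the point (0, 1)
print(rotate(p, cmath.pi))      # approximately -1: two quarter turns take (1, 0) to (-1, 0),
                                # reflecting the fact that i squared is -1
```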

Professor Stephen Hawking has used imaginary numbers in his own work on understanding the origins of the universe, employing “imaginary time” in order to explore what it might be like for the universe to be finite in time and yet have no real boundary or “beginning.”  The potential value of such a theory in explaining the origins of the universe leads Professor Hawking to state the following:

This might suggest that the so-called imaginary time is really the real time, and that what we call real time is just a figment of our imaginations.  In real time, the universe has a beginning and an end at singularities that form a boundary to space-time and at which the laws of science break down.  But in imaginary time, there are no singularities or boundaries.  So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like.  But according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds.  So it is meaningless to ask: which is real, “real” or “imaginary” time?  It is simply a matter of which is the more useful description.  (A Brief History of Time, p. 144.)

If you have trouble understanding this passage, you are not alone.  I have a hard enough time understanding imaginary numbers, let alone imaginary time.  The main point I wish to underline is that even the best theoretical physicists don’t bother trying to prove that their conceptual tools are objectively real; the only test of a conceptual tool is whether it is useful.
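For what it is worth, the “imaginary time” Hawking describes has a precise, if modest, technical core: substituting t = -iτ (a so-called Wick rotation, a standard device in theoretical physics) turns the minus sign in the spacetime interval into a plus sign, so that time behaves mathematically like just another spatial direction. The sketch below shows only that substitution, not Hawking’s full no-boundary proposal.

```latex
% Minkowski interval (ordinary, "real" time t) and the Euclidean interval
% obtained by the substitution t = -i\tau ("imaginary time"):
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
\quad\longrightarrow\quad
ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2
```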

As a final example, let us consider one of the most intriguing of imaginary mathematical objects, the “hypercube.”  A hypercube is a cube that extends into additional dimensions, beyond the three spatial dimensions of an ordinary cube.  (Time is usually referred to as the “fourth dimension,” but in this case we are dealing strictly with spatial dimensions.)  A hypercube can be imagined in four dimensions, five dimensions, eight dimensions, twelve dimensions — in fact, there is no limit to the number of dimensions a hypercube can have, though the hypercube gets increasingly complex and eventually impossible to visualize as the number of dimensions increases.
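To get a feel for how freely such objects can be generated, here is a short sketch that builds the vertices of an n-dimensional hypercube (every possible string of n coordinates, each 0 or 1) and counts its vertices and edges; nothing in the construction breaks down as n grows except our ability to picture the result. The dimensions chosen below are arbitrary examples.

```python
from itertools import product

def hypercube(n):
    """Return the vertices and edges of the unit n-dimensional hypercube.
    Each vertex is an n-tuple of 0s and 1s; two vertices share an edge
    exactly when they differ in a single coordinate."""
    vertices = list(product((0, 1), repeat=n))
    edges = [(v, w) for i, v in enumerate(vertices) for w in vertices[i + 1:]
             if sum(a != b for a, b in zip(v, w)) == 1]
    return vertices, edges

for n in (2, 3, 4, 5):
    vertices, edges = hypercube(n)
    # An n-dimensional hypercube has 2**n vertices and n * 2**(n - 1) edges.
    print(f"{n}-cube: {len(vertices)} vertices, {len(edges)} edges")
```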

Does a hypercube correspond to anything in physical reality?  Probably not.  While there are theories in physics that posit five, eight, ten, or even twenty-six spatial dimensions, these theories also posit that the additional spatial dimensions beyond our third dimension are curved up in very, very small spaces.  How small?  A million million million million millionth of an inch, according to Stephen Hawking (A Brief History of Time, p. 179).  So as a practical matter, hypercubes could exist only on the most minute scale.  And that’s probably a good thing, as Stephen Hawking points out, because in a universe with four fully-sized spatial dimensions, gravitational forces would become so sensitive to minor disturbances that planetary systems, stars, and even atoms would fly apart or collapse (pp. 180-81).

Dr. Frenkel would admit that hypercubes may not correspond to anything in physical reality.  So how do hypercubes exist?  Note that there is no limit to how many dimensions a hypercube can have.  Does it make sense to say that the hypercube consisting of exactly 32,458 dimensions exists objectively out there somewhere, waiting for someone to discover it?   Or does it make more sense to argue that the hypercube is an invention of the human imagination, and can have as many dimensions as can be imagined?  I’m inclined to the latter view.

Many scientists insist that mathematical objects must exist out there somewhere because they’ve been taught that a good scientist must be objective and dedicate himself or herself to the discovery of things that exist independently of the human mind.  But there are too many mathematical ideas that are clearly products of the human mind, and they are too useful to abandon merely because they are products of the mind.

Objectivity is Not Scientific

It is a common perception that objectivity is a virtue in the pursuit of knowledge, that we need to know things as they really are, independent of our mental conceptions and interpretations.  It is also a common perception that science is the form of knowledge that is the most objective, and that is why scientific knowledge makes the most progress.

Yet the principle of objectivity immediately runs into problems in the most famous scientific theory, Einstein’s theory of relativity.  According to relativity theory, there is no objective way to measure objects in space and time — these measures are always relative to observers, depending on the velocities at which the objects and observers are travelling, and as a result observers often end up with different measures for the same object.  For example, an object travelling at very high speed will appear shortened along its direction of motion to an outside observer, a phenomenon known as length contraction.  In addition, time will pass more slowly for an observer travelling at high speed than for one travelling at low speed.  This phenomenon is illustrated by the “twin paradox”: given a pair of twins, if one sets off in a high-speed rocket and later returns while the other stays on earth, the travelling twin will have aged more slowly than the twin on earth.  Finally, the sequence of two spatially separated events, say Event A and Event B, can differ according to the position and velocity of the observer.  Some observers may see Event A occurring before Event B, others may see Event B occurring before Event A, and others will see the two events as simultaneous.  There is no objectively true sequence of events.
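A minimal sketch of the time-dilation arithmetic behind the twin paradox may help; the trip length and speeds below are arbitrary illustrations, and the usual textbook simplification of ignoring the acceleration phases is adopted here.

```python
import math

def traveler_elapsed_years(earth_years, v_over_c):
    """Time experienced by the travelling twin during a round trip that takes
    `earth_years` as measured on Earth: t' = t * sqrt(1 - (v/c)^2)."""
    return earth_years * math.sqrt(1.0 - v_over_c ** 2)

earth_years = 10.0  # illustrative round-trip duration in Earth's frame
for beta in (0.5, 0.9, 0.99):
    print(f"v = {beta}c: Earth twin ages {earth_years:.0f} years, "
          f"travelling twin ages {traveler_elapsed_years(earth_years, beta):.2f} years")
```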

The theory of relativity does not say that everything is relative.  The speed of light, for example, is the same for all observers, whether they are moving at a fast speed toward a beam of light or away from a beam of light.  In fact, it was the absolute nature of light speed for all moving observers that led Einstein to conclude that time itself must be different for different observers.  In addition, for any two events that are causally-connected, the events must take place in the same sequence for all observers.  In other words, if Event A causes Event B, Event A must precede Event B for all observers.  So relativity theory sees some phenomena as different for different observers and others as the same for different observers.
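The constancy of the speed of light can itself be seen in the relativistic rule for combining velocities; here is a brief sketch, with all velocities expressed as fractions of c.

```python
def combine_velocities(u_over_c, v_over_c):
    """Relativistic velocity addition: an object moving at speed u within a frame
    that itself moves at speed v relative to you moves at (u + v) / (1 + u*v/c^2)
    relative to you.  All speeds here are fractions of c."""
    return (u_over_c + v_over_c) / (1.0 + u_over_c * v_over_c)

print(combine_velocities(0.5, 0.5))  # 0.8, not the 1.0 that everyday addition suggests
print(combine_velocities(0.9, 0.9))  # about 0.994, still below the speed of light
print(combine_velocities(1.0, 0.9))  # exactly 1.0: light moves at c for every observer
```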

Finally, the meaning of relativity in science is not that one person’s opinion is just as valid as anyone else’s.  Observers within the same frame of reference (say, multiple observers travelling together in the same vehicle) should agree on measurements of length and time for an outside object even if observers from other reference frames have different results.  If observers within the same vehicle don’t agree, then something is wrong — perhaps someone is misperceiving, or misinterpreting, or something else is wrong.

Nevertheless, if one accepts the theory of relativity, and this theory has been accepted by scientists for many decades now, one has to accept the fact that there is no objective measure of objects in space and time — it is entirely observer-dependent.  So why do many cling to the notion of objectivity as a principle of knowledge?

Historically, the goal of objectivity was proposed as a way to solve the problem of subjective error.  Individual subjects have imperfect perceptions and interpretations.  What they see and claim is fallible.  The principle of objectivity tries to overcome this problem by proposing that we evaluate objects as they are in themselves, in the absence of the human mind.  The problem with this principle is that we can’t really step outside of our bodies and minds to evaluate an object.

So how do we overcome the problem of subjective error?  The solution is not to abandon mind but to supplement it: by communicating with other minds, checking for individual error by seeing whether others are getting different results, engaging in dialogue, and attempting to come to a consensus.  Observations and experiments are repeated many times by many different people before conclusions are established.  In this view, knowledge advances by using the combined power of thousands and thousands of minds, past and present.  It is the only way to ameliorate an incorrect relationship between subject and object and to make that relationship better.

In the end, all knowledge, including scientific knowledge, is essentially and unalterably about the relationship between subjects and objects — you cannot find true knowledge by splitting objects from subjects any more than you can split H2O into its individual atoms of hydrogen and oxygen and expect to find water in the component parts.