Chapter VI: The Crumbling of Certainty
Truth without Certainty
If particle physics is the foundation of the reductionist program described in Chapter Three, then mathematics and formal logic are the bedrock upon which that foundation rests. To truly fulfill the Technological Program of complete control, we must first achieve a certainty of knowledge, so that results follow undeviatingly from expectations, conclusions from premises, and function from design. In this view, if a machine doesn’t work or a design fails, it can only be because some variable was left uncontrolled; i.e., the knowledge of initial conditions was incomplete. When every factor is accounted for, every variable measured, every force captured in a mathematical equation, then the predictability of a physical system is no less dependable than the mathematics that underlies it.
But how dependable is this mathematics? In the imaginary future of Leibniz and Laplace, where all linguistic meaning is fully precise and all of science fully mathematized, any dispute can be resolved by straightforward calculation, without doubt or controversy, and all the truths of nature will be laid bare. Mathematics, after all, is the epitome of certainty, in which conclusions are reached not by persuasion but by formal, deductive proof, indisputable unless logic itself is violated. But how do we know that mathematics is sound? How do we know that there are not hidden contradictions buried in the axioms of arithmetic? And equally important, how do we know that all truth can be reached starting from those axioms? As physics was placed on an axiomatic footing and more and more fields of knowledge appealed to mathematics for their legitimacy, these questions took on increasing urgency around the end of the 19th century.
The axiomatic method, which originated with Euclid, is implicit in today’s notion of scientific rigor. It starts with explicit definitions of terms to be used, and the assumptions one is operating from. After all, how can reasoning be sound if its very terms are ambiguous? You start from basic definitions and premises, and build from those. In mathematics the necessity of the axiomatic approach was highlighted by various paradoxes in set theory that demonstrated the ultimate incoherency of naïve (non-axiomatic) definitions of a set. For example, consider Russell’s Paradox: the set of all sets that are not members of themselves. Is that set a member of itself? By definition, if it is, then it isn’t, and if it isn’t then it is. Hence the necessity of axioms implicitly defining what is and is not a set. Axioms were also formulated for arithmetic: naïvely, we think we know what addition and multiplication are, but do we really? The axioms of arithmetic define them formally.
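For the symbolically inclined reader, the paradox takes only a line or two. This formulation is the standard one, not original to this chapter: define Russell’s set R and ask whether it contains itself.

```latex
% Russell's set: the set of all sets that are not members of themselves
R \;=\; \{\, x \mid x \notin x \,\}
% Asking whether R contains itself yields a contradiction either way:
R \in R \;\Longleftrightarrow\; R \notin R
```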
The program to put all of mathematics (and by implication, eventually all of science) on a firm axiomatic footing was articulated by the German mathematician David Hilbert, and its culmination was to be a proof that such axiom systems (particularly arithmetic) were both sound and complete. Sound, in that no contradictory results could arise from them; complete, in that all true statements could be proven from them. At the time it seemed intuitively obvious: surely anything true is also provable. Else how would you know it were true? How could you differentiate it from any other assertion? In a sense, the entire Cartesian ambition to become the “lords and possessors of nature” hinged upon the completeness proof, for it would establish that no mystery, no truth, is beyond the purview of human logic. Starting with a formal axiom system, one can proceed according to the rules of logic to derive, mechanically and unthinkingly, all possible proofs from those axioms. A computer could do it. And even though computers didn’t exist in 1900, the mechanical nature of axiomatic proof seemed to promise that Leibniz’s vision would come true. To settle any dispute, and indeed to eventually arrive at all truth, we would but need to say, “Let us calculate.”
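To make “a computer could do it” concrete, here is a minimal sketch in Python of mechanical theorem-derivation in a toy formal system. The rules are borrowed from Hofstadter’s well-known MIU puzzle, not from Hilbert; the point is only that deriving theorems requires no insight at all, just blind rule-following.

```python
from collections import deque

# A toy formal system: the axiom is a string, the "inference rules" are
# string rewrites, and the "theorems" are whatever the rules can reach.
# (These are Hofstadter's MIU rules, used purely for illustration.)
AXIOMS = ["MI"]

def successors(s):
    """Apply every rule to s in every possible way."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                    # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] + s[1:])        # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])        # rule 4: UU  -> (deleted)
    return out

def enumerate_theorems(limit=20):
    """Mechanically derive theorems, breadth-first. No thinking required."""
    seen, queue, theorems = set(AXIOMS), deque(AXIOMS), []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        for t in sorted(successors(s)):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems

print(enumerate_theorems())
```

A breadth-first search like this one will eventually reach every theorem the rules can generate; what it can never do is tell you what lies forever outside their reach.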
Imagine then the sense of bewilderment that followed Gödel’s famous incompleteness theorem of 1931, which destroyed any hope of ever completing Hilbert’s Program. Usually presented in popular literature as demonstrating that “there exist true statements of arithmetic that cannot be proven from the axioms,” Gödel’s theorem is actually a bit more subtle than that. I will present the theorem here in a little more depth, because its subtleties have both direct and metaphoric consequences for the Scientific Program and the Technological Program.
The divergence of truth and provability in Gödel’s theorem can only be understood in the context of the distinction between a formal theory and an interpretation of that theory. A formal theory is the set of all theorems deducible from a given set of axioms according to the usual rules of logic. Its interpretation is whatever real or abstract system the theory describes.
To take a familiar example, the geometrical axioms of Euclid generate a theory, one interpretation of which is the idealized lines, points, angles, and so forth drawn on a flat surface. One interpretation of the theory of arithmetic is the set of natural numbers we use to count, add, and multiply. Provability is a property of sentences written in the formal language of the theory; truth is a property of their counterparts in the real world. For example, in the formal language the sentence “∀x ∀y (x·y = 0 → x = 0 ∨ y = 0)” is provable from the seven basic axioms of formal arithmetic (named Q), and its interpretation “If the product of two numbers is zero, then at least one of those numbers must be zero” is true in real-life arithmetic. That assertion seems quite obvious, but how can we be sure? How, in other words, can we prove it? Only by abstracting from the real-world example a theory, a set of definitions (“here’s what addition really means”) embedded in axioms.
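For the curious reader, here is one standard presentation of the seven axioms of Q (Robinson arithmetic), with S the successor function (“plus one”) and all variables implicitly universally quantified; notation varies slightly from textbook to textbook.

```latex
\begin{align*}
&(\mathrm{Q}1)\quad Sx \neq 0 \\
&(\mathrm{Q}2)\quad Sx = Sy \rightarrow x = y \\
&(\mathrm{Q}3)\quad x \neq 0 \rightarrow \exists y\,(x = Sy) \\
&(\mathrm{Q}4)\quad x + 0 = x \\
&(\mathrm{Q}5)\quad x + Sy = S(x + y) \\
&(\mathrm{Q}6)\quad x \cdot 0 = 0 \\
&(\mathrm{Q}7)\quad x \cdot Sy = (x \cdot y) + x
\end{align*}
```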
But then the question arises, How can we know whether the theory really corresponds completely to the interpretation? Indeed, the seven axioms of Q are so minimal that it is impossible to prove basic arithmetical facts such as “∀a ∀b (a + b = b + a)”. It is impossible to prove from Q that you can get to every number, eventually, by counting from zero. Such inconveniences are easily dealt with, however, by adding them in as new axioms. The ultimate goal of Hilbert’s Program, and indeed of the Scientific Program of complete understanding, would be to add in as axioms all unprovable statements whose interpretations are true. You would then have a complete axiomatization of reality, the ultimate conversion of nature to number. The separate human realm would finally encompass all of reality.
What Gödel proved was that this is impossible, that there is no way of adding enough axioms to prove every true sentence (i.e. every sentence whose interpretation is true). An infinity of axioms would be required—but even this is not the deepest problem. The problem is that there is no “effective procedure” to generate that infinity, no finite means to make the theory complete. You cannot say, “Let every sentence that is true in such-and-such an interpretation be an axiom,” because there is no way to tell what those sentences are.
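In compressed form, and glossing over the famous arithmetization of syntax, Gödel’s construction produces for any suitable theory T a sentence G that asserts its own unprovability:

```latex
% By the diagonal lemma, the theory T itself proves the equivalence:
T \vdash\; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, T cannot prove G; read in the intended
% interpretation, G is therefore true but unprovable.
```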
In other words, there is necessarily something missing in any mathematical description of reality. There is no finite means to encompass all truth in a system of labels and quantities (which is essentially what a “formal system” is). Even without quantum indeterminacy, the Scientific Program is doomed to failure from the start. Doomed as well is the whole notion of reductionistic rationality as a sure guide to truth, the approach mentioned above of starting any problem by laying out rigorous definitions. By limiting knowledge to what can be proven, we exclude large swaths of the truth.
And it gets worse. The above state of affairs would be acceptable if the truths left inaccessible were unimportant ones, contrived sentences of arithmetic like the one Gödel constructed for his proof. But as soon became apparent through the work of Turing, Post, and more recently Gregory Chaitin, it is not just a few recondite corners of mathematics that are impervious to proof, but the vast majority of all mathematical facts.
The very idea of rational understanding is to reduce the complex to the simple, to find the “reasons” underlying things. The quintessential example of this is the reduction of the complex paths of planets in the sky to Newton’s laws of motion and universal gravitation. What Turing proved is that there are important mathematical questions that cannot be answered that way, but only along the lines of “because that’s the way it is.” His famous Halting Problem showed that there is no general means of determining whether an arbitrary Turing Machine (an idealized computer) will eventually halt given a certain input—no means, that is, except to actually try it out.[4] There are specific methods for some Turing Machines, but no universal method, no finite formula or set of instructions, nothing that could be programmed into a computer. There can be no general theory of why Turing Machines halt. Chaitin has extended this result even further to observe that almost all mathematical fact is unprovable, and even worse, that mathematical truth is random in the sense of algorithmic information theory.[5] There is no rhyme or reason to the truth; nothing that could be understood in the finite, standard terms required to bring reality into the human realm of control.
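Turing’s argument itself is short enough to sketch in a few lines of Python. The function names here (halts, contrary) are illustrative inventions; the whole point of the theorem is that no real implementation of halts can exist.

```python
# A sketch of Turing's diagonal argument. Suppose, for contradiction,
# that some function halts(program, argument) could always decide
# whether program(argument) eventually halts.

def halts(program, argument):
    """Hypothetical halting decider: True iff program(argument) halts.
    Left unimplemented, because Turing proved it cannot be written."""
    raise NotImplementedError("no general halting decider exists")

def contrary(program):
    """Do the opposite of whatever halts() predicts about a program
    run on its own source."""
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop forever -> halt at once

# Now feed contrary to itself. If halts(contrary, contrary) returns True,
# then contrary(contrary) loops forever; if it returns False, then
# contrary(contrary) halts. Either way, halts() gave the wrong answer
# about contrary(contrary) -- so no such decider can exist.
```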
Mathematics has sealed the fate of the age-old attempt to substitute chaotic, unpredictable reality with a controllable artificial version of it. However fine our mapping of reality, however sophisticated our modeling, something will always be missing, and this limitation is inherent in the map itself. Of course a map, a set of definitions, an axiom system can be useful, but when we mistake it for the real thing then we are marooned in a finite world of our own making, a projection of our own assumptions, a tiny subset of Truth delimited from the very beginning by what we hold to be self-evident. As Chaitin puts it, “In other words, the current map of mathematics reflects what our tools are currently able to handle, not what is really out there.”[6] The danger is that blinded by our assumptions, we reject actual experiential data with the thought, “It isn’t true because it couldn’t be true.” This is precisely what has happened in many branches of science, which have become so mired in their principles that they cannot countenance “anomalous” phenomena no matter how well-documented.
We make an analogous mistake in everyday life whenever beliefs blind us to experience. Consider for example the shy teenager who is so convinced she is unattractive to boys that she is oblivious to their attentions, interpreting secret-admirer notes as mockery and compliments as sympathetic attempts to cheer her up.
The results of Gödel, Turing, and Chaitin imply a different way of pursuing knowledge that forgoes certainty in favor of utility. In the absence of irrefragable truths that exist “out there”, the inquirer’s relation to the world becomes paramount. In mathematics, the researcher can add new axioms onto a formal system, justifying them either with computational evidence or merely by the appeal of the results they can prove. In this way, subjectivity creeps back into mathematics. One body of theory derives from the axioms of set theory with the Axiom of Choice; another body of theory from the axioms without it; one body of theory derives from the addition of the Continuum Hypothesis as an axiom, another from the addition of its negation. In computation, many results follow from assuming that there is no polynomial-time algorithm for solving NP-hard problems, an assumption widely accepted based on computational evidence and the repeated frustration of mathematicians’ best efforts to find a polynomial-time algorithm.[7] In geometry, the inclusion of non-Euclidean axioms provides a tool for understanding curved space-time, just as the Euclidean axioms describe geometry on a flat surface. We can, playfully, try out different axiom systems to come up with different descriptions of reality, parts of reality, or aspects of the universe.
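Stated compactly, these are the classical independence results of Gödel (1938) and Cohen (1963): if the base axioms of set theory are consistent at all, then the Axiom of Choice and the Continuum Hypothesis can each be added or denied without introducing contradiction.

```latex
\mathrm{Con}(\mathrm{ZF}) \;\Rightarrow\;
  \mathrm{Con}(\mathrm{ZF}+\mathrm{AC}) \,\wedge\, \mathrm{Con}(\mathrm{ZF}+\neg\mathrm{AC}) \\
\mathrm{Con}(\mathrm{ZFC}) \;\Rightarrow\;
  \mathrm{Con}(\mathrm{ZFC}+\mathrm{CH}) \,\wedge\, \mathrm{Con}(\mathrm{ZFC}+\neg\mathrm{CH})
```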
If, my dear reader, you have become lost in these complexities, let me emphasize the key point. It is simply that in mathematics as well as physics, there is not always a reason why. Sometimes things are true just because they are. Have you ever heard someone say, “Oh yeah? If it is true, then prove it!” Unconsciously we have learned to equate truth and provability, just as we have learned to value reason over intuition. Hilbert’s Program was just one manifestation of our craving for certainty, for an indisputable source of truth outside of ourselves. The same impulse underlies the ubiquitous elevation of “experts” in our society, and the giving over of more and more of our autonomy to external authorities. It also underlies many religious cults, in which certainty comes from the guru, as well as Christian fundamentalism, which looks to yet another external authority, the Bible, for an indisputable source of truth. The doctrine of Biblical inerrancy is the religious counterpart of the scientific ambition to axiomatize reality. Here is certainty! No longer is it necessary to look within oneself to know truth—it is all laid out in black and white. We are no longer divine creators of our world, only receivers, only consumers.
Today all hope of ever achieving such certainty is ended. Of course we can, like the Christian fundamentalist, the cult follower, or the dogmatic scientist, choose to remain ensconced in our axiom system and refuse to explore any truth that lies outside it. But such certitude comes at a high price: insularity, stagnation, and a cutting-off from new worlds of knowledge and experience. In fact, the crumbling of certainty is incredibly liberating. Its effects are similar to the effects of the failure of determinism and objectivity in the realm of physics. Truth, like being, ceases to be an independent quality separate from ourselves. Both begin to make sense only as a relationship. Divorced from logical certainty, divorced from proof, what can truth mean? The only satisfactory answer that I’ve found is that truth is a state of integrity. When faced with two different interpretations of an experience, instead of gathering more and more evidence to decide which is true, the new metaphor calls us to simply choose one or the other, depending on which fits with greater integrity into all that we are and, more importantly, all we strive to be. We create who we are through the truths we choose.
Does that statement sound dangerous to you? Does it seem to give license to play fast and loose with the truth, to ignore the evidence and blindly maintain a self-serving interpretation of reality? Does it allow us to justify anything we want by saying, “It’s my truth”?
Actually it has the opposite effect. When we choose truth consciously, in knowing self-definition, that choice takes on a gravity absent from ordinary reasons and justifications. If we understand truth as a creative choice, we will be all the more conscientious in choosing. Returning to our mathematical metaphor, truth is a property of an interpretation, not of proof, not of reasons. So in choosing a truth we are choosing an interpretation of our world; that interpretation, in turn, generates new experiences consistent with it. Our choice of the truths we live by has world-creating power.
Whether inside or outside of science, we might see the quest for truth not as an encompassing of more and more facts, not as a growing certainty about the world, but rather as a path of self-understanding and conscious creativity. When truth is, as in mathematics, often beyond certainty, beyond even reason, then how do we recognize it? How do we choose between a belief and its opposite? We are left with integrity, which we can clarify by asking, “Is that me? Is that the universe I choose to live in? Is that the reality I wish to create?” We are not discrete observers separate from the universe that we observe, able only to discover what is. We are creators.
Let me give you an example. Through a few personal experiences and extensive reading I have come into contact with many phenomena that conventional science does not accept. At some point I was faced with a crisis of belief, a choice between two different interpretations, each logically coherent. They went something like this:

(1) My physical experience of qi was a fantasy induced by the unusual circumstances of the dojo and the culture shock of being in Taiwan. The hundreds of apparently normal, sincere, and humble practitioners were similarly deluded, except for those consciously conspiring in a hoax. Formerly respectable or even eminent figures like John Mack, Roger Woolger, and Barbara Brennan, whose books I’d read, had succumbed to some form of dementia. The many people of apparent integrity who’d shared stories with me of miraculous coincidences, inexplicable experiences, ghosts, and so on were actually putting me on, trying to seem special, hungry for attention, and I must be a poor judge of character. My life is full of dupes, frauds, hoaxers, liars, and the mentally unstable. Even my own wife lies to me for no apparent reason about extraordinary experiences that happened to her years ago. If I try hard enough, I can cobble together a belief system in which none of these “unscientific” occurrences ever really happens.

(2) These experiences that I’ve had, that I’ve been told, and that I’ve read about are as real as any other. The people in my life are generally as they seem, and not pathologically plagued by confabulation, selective memory, and compulsive lying. John Mack did not write his books about alien abduction because being a Harvard professor of psychiatry wasn’t good enough and he wanted fame and notoriety. Eminent scientists do not usually throw their careers down the sewer in some vain pursuit of purely imaginary psi phenomena. And in my own experiences, I saw what I saw and felt what I felt. And, finally, believing all this I must also believe that the entire corpus of science is fundamentally incomplete.
Neither of these two interpretations suffers any internal logical inconsistency. Just like the shy girl, I can fit all the data into one of many interpretations, many universes. Even Occam’s Razor cannot always rescue me—it is usually “simpler” to discard inconvenient facts on some pretext. By choosing a truth, I am choosing what universe I will live in and making a statement to myself and the world about who I am. Sometimes I might even do this playfully, in a spirit of exploration and discovery, like when I spend days immersing myself in “skeptics'” writings, and notice how it changes my state of mind, emotions, and relationships. Usually, though, the progression from one belief-state to another is unconscious, subject to a logic and a process beyond my understanding. A truth that served me well in one stage of existence becomes obsolete as I move on to another. And so it is with all of us.
The work of Gödel and Turing has ended forever the Babelian program of taking nature by finite means. It has shown us the limits of reducing reality to label and number, and the impossibility of ever subjugating truth to certainty. Understood metaphorically, mathematical incompleteness hints at a new way of understanding truth, knowledge, and belief as a way of relating to the universe and defining who we are, a process of cocreation of reality, and not a mere unveiling and cataloging of what is already objectively out there. Certainty is gone, but in its place we have freedom.
[4] And even then you may never know for sure. It could very well be that running the Turing Machine for a billion iterations tells you only that it has not halted within the first billion iterations.
[5] Basically what that means is that given a random list of arbitrary mathematical truths, there is in general no shorter way to characterize that list: the list itself is its own shortest description. These results are presented in depth in Chaitin’s controversial classic, Information, Randomness and Incompleteness: Papers on Algorithmic Information Theory.
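In symbols, with U a fixed universal computer, |p| the length of program p, and c a small constant: the algorithmic (Kolmogorov-Chaitin) complexity of a string x is the length of its shortest description, and x is random when it has no description meaningfully shorter than itself.

```latex
K(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}
\qquad
x \text{ is random iff } K(x) \,\geq\, |x| - c
```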
[6] Chaitin, Gregory. Meta Math! The Quest for Omega. Pantheon, 2005, p. 20.
[7] Borwein, Jonathan, and David Bailey. Mathematics by Experiment. A.K. Peters, 2003, pp. 4-5. There is a recent trend in mathematics toward “experimental mathematics”, which forgoes the certainty of traditional analytic proof and seeks insight instead through the use of computers. There may be certain basic truths that are inherently unprovable in present axiom systems; for example, the conjecture that all irrational algebraic numbers are Borel-normal. This has been checked computationally for some numbers up to trillions of digits. While at first glance experimental mathematics may seem just another form of empiricism applied to mathematics, and thus consistent with the Baconian assumption of an objective universe “out there”, the matter is actually extremely tricky. If the digits are in some sense random, in what sense do they exist, or in what sense are they necessarily what they are, before they are calculated? This question is not trivial, but more involved ruminations are beyond the scope of this book. I refer the reader to the works of Gregory Chaitin for a philosophical discussion of related issues.
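For reference, a number is normal in base b when every block of k digits occurs among its digits with the limiting frequency b^(-k), and absolutely normal (the sense intended by “Borel-normal” here; terminology varies) when this holds in every base b ≥ 2:

```latex
\lim_{N \to \infty}
\frac{\#\{\text{occurrences of } w \text{ in the first } N \text{ base-}b \text{ digits}\}}{N}
\;=\; b^{-k}
\quad \text{for every digit string } w \text{ of length } k
```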