Three Objections to the Penrose–Lucas Argument

The following is inspired by an exchange between myself and another user on this NaNoWriMo forum thread regarding Artificial Intelligence.

Background

The Penrose–Lucas argument states that, because humans are capable of knowing the truth of Gödel-unprovable statements, human thought is necessarily non-computable.
— Wikipedia: Orchestrated objective reduction#Penrose–Lucas argument

Gödel’s incompleteness theorems prove that no formal system of logic powerful enough to express basic arithmetic can be both consistent and complete.

In particular, Gödel’s second theorem says that no such formal system can prove its own consistency, unless it is inconsistent.

Gödel’s incompleteness theorems are closely related to the halting problem in computability theory. The question raised by the halting problem is: given an arbitrary computer program X to run, will X eventually come to an end, or will X run forever? Turing proved that there is no algorithm that can answer the question for every program X. Proofs, however, are algorithmic – they follow a sequence of steps, and the steps obey some list of rules – so if every non-halting program had a proof of its non-halting, we could solve the halting problem: run X while simultaneously searching through all possible proofs, and either X halts or we eventually find the proof that it never will. Since the halting problem is unsolvable, there must exist computer programs that will run forever but that cannot be proven to run forever. Thus, for some programs X, “X does not halt” is an example of a true statement that cannot be proven true: incompleteness.
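
To make Turing’s trick concrete, here’s the diagonal argument as a Python sketch. The function halts below is the hypothetical all-seeing halt-checker – it cannot actually be implemented, and the sketch is precisely the reason why:

```python
# Sketch of Turing's diagonal argument. `halts` is the HYPOTHETICAL
# oracle being disproved; no real implementation can exist.
def halts(program, argument):
    """Supposedly returns True iff program(argument) eventually stops."""
    raise NotImplementedError("no such algorithm can exist")

def contrarian(program):
    """Does the opposite of whatever `halts` predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    # predicted to loop forever? then halt immediately

# Feed the contrarian to itself: contrarian(contrarian) halts if and only
# if halts(contrarian, contrarian) says it doesn't. Whatever answer `halts`
# gives is wrong, so no correct `halts` can exist.
```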

The Penrose–Lucas argument claims that humans are able to see that certain formal systems of logic are actually consistent, and furthermore that this demonstrates that the human brain is not limited by Gödel’s incompleteness theorems. Penrose suggests that there exist physical systems which are deterministic yet not algorithmic, and that the brain is one of these non-algorithmic physical systems. If this were so, it would follow that the brain cannot be simulated by a computer, at least not as we currently conceive them, as computers are algorithmic.

Gödel’s first and second incompleteness theorems

Gödel’s first incompleteness theorem constructs a sentence, called “the Gödel sentence” (GF), for formal system F. GF is a statement within F about polynomials, sets, or whatever F’s basis is… but it is also a statement about F. In particular, GF is a statement that is true iff GF cannot be proved in F – that is, it can be rendered in English as “this sentence is not provable in F”.

This leads to Gödel’s second incompleteness theorem, which (as stated above) proves that F cannot prove the consistency of F (unless F is inconsistent). The second theorem is more powerful than the first, as the first follows immediately from the second: if F is inconsistent, then any sentence can be proved in F, including GF, therefore GF is false; however, if we assume axiomatically that F is consistent, we can construct a larger system F+Con(F) (rendered in English as F + “F is consistent”) that can prove that GF cannot be proved in F, and thus that GF is true.
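
For readers who prefer symbols, the textbook facts behind the last two paragraphs compress to the following (standard provability notation; the compression is mine):

```latex
\begin{align*}
  & F \vdash G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
      && \text{($G_F$ asserts its own unprovability)} \\
  & F \text{ consistent} \;\Longrightarrow\; F \nvdash \mathrm{Con}(F)
      && \text{(second incompleteness theorem)} \\
  & F \vdash \mathrm{Con}(F) \rightarrow G_F
      && \text{(hence $F + \mathrm{Con}(F) \vdash G_F$)}
\end{align*}
```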

Lucas’s original formulation of the argument was in terms of Gödel sentences being knowably true to humans, not in terms of formal systems being knowably consistent. However, outside of a few esoteric systems of logic, such as Robinson arithmetic, the two incompleteness theorems are generally equivalent. Thus, we phrase our arguments in terms of F (not) proving its own consistency, rather than in terms of F (not) proving its Gödel sentence GF.

ONE: Maybe Humans Use More Powerful Axioms?

When Gödel dropped his bombshell, the world of logic and rational argument were turned upside down, and it may have seemed to some, fleetingly, that science and mathematics would never be the same again. In the event, science, mathematics and daily life proceeded more or less normally in spite of it. To many scientists, Gödel’s theorem was regarded as little more than an obscure curiosity, one that could be shrugged aside as largely irrelevant to the real world. Now, following the work of Gregory Chaitin, that is no longer a tenable response. Chaitin greatly extended the sweep of Gödel’s basic insight, and re-cast the notion of incompleteness in a way that brings it much closer to the real world of computers and physical processes. A key step in his work is the recognition of a basic link between mathematical undecidability and randomness. Something is random if it has no pattern, no abbreviated description, in which case there is no algorithm shorter than the thing itself that captures its content. And a random fact is true for no reason at all; it is true “by accident,” so to speak. With this conceptual tool-kit, Chaitin was able to demonstrate that mathematics is shot-through with randomness. It’s there even in ordinary arithmetic! Mathematics, supposedly the epitome of logical orderliness, is exposed as harboring irreducible arbitrariness.
— Paul Davies, Foreword to Thinking about Gödel & Turing by Gregory Chaitin

When people these days say “logic”, they usually mean ZFC set theory or something built on top of it. ZFC is used so heavily because (a) it’s widely believed to be consistent, and (b) the statements it makes mostly line up with our intuitive ideas about what “truth” is.

(Other formal systems of logic exist — Peano arithmetic, propositional calculus, first-order and second-order predicate calculus, et al. — but the same Gödelian arguments apply to them as apply to ZFC, so they will not be considered separately.)

Suppose that humans can perceive the truth of the statement “ZFC is consistent”. This is nothing special, really. We can pick a new axiom X out of a hat such that ZFC+X can prove ZFC’s consistency. It’s not even hard. That said, some values of X are boring… but not all: for instance, if X = “there exists a weakly inaccessible cardinal”, then we can prove ZFC consistent and we gain the power to prove some new and interesting things about infinity.

That power comes at a cost, though: how do we know if ZFC+X is consistent? One could prove ZFC+X’s consistency by inventing a new axiom Y… but who says ZFC+X+Y is consistent? And so on. Each time you add an axiom, you’re making the system more powerful, giving yourself the power to prove more things true… but how can you be sure that you’re proving the “right” things? An axiom is just an assumption! You can’t just blindly assume that your assumptions are true! Unless X was redundant, ZFC+X rests on shakier ground than ZFC itself. You have to take an inventory of what ZFC+X can prove, compare it to what ZFC+Not(X) can prove, and decide which one you think feels “truthier”, i.e. which one is a better model of reality. Seasoned mathematicians can spend (and have spent!) decades debating which values of X feel “truthy”.

Some people have the bright idea of directly using X = “ZFC is consistent”, better known as Con(ZFC). One could then repeat this process, creating the theories ZFC+Con(ZFC), then ZFC+Con(ZFC)+Con(ZFC+Con(ZFC)), and so on forever. But this buys no escape. Take the union of the whole infinite tower: its axioms can still be listed by a computer program – the theory is still recursively enumerable – so Gödel’s theorems apply to it just as they did to ZFC, and it has a Gödel sentence of its own. You can keep feeding the union back in, even pressing on transfinitely, but any version of the process strong enough to dodge incompleteness produces a set of axioms that is no longer recursively enumerable: no algorithm can even list the axioms, let alone check a proof against them in a finite number of steps. Either way, there remain statements that are true but not provable: incompleteness. We’re right back where we started.

And now, a quick tangent…

Chaitin’s constant Ω is a real number between 0 and 1, representing the probability that a randomly constructed computer program will halt. Ω is an example of a non-computable real number: it can be defined but it cannot be computed, because computing it would require solving the halting problem. Even so, we can learn some interesting facts about its value.

The exact value of Ω depends on the computer programming language being used, so let’s call the language L and the constant ΩL. Knowing the first n bits of ΩL allows one to solve the halting problem for all programs in L of length n or shorter. Given that, one might expect that we could learn nothing about ΩL’s value… but that expectation is wrong. We can prove the first few bits of ΩL!
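
Here’s a sketch of how those n bits would be cashed out. The hooks load, step, and halted stand in for a hypothetical interpreter of L, and programs for an enumeration of its programs – all pure assumptions; the dovetailing logic around them is the actual content of the claim:

```python
# How the first n bits of Omega_L solve the halting problem for programs
# of length <= n. `programs`, `load`, `step`, `halted` are hypothetical
# hooks into an interpreter for language L (assumptions for this sketch).
from fractions import Fraction

def halts_up_to(n, omega_prefix, programs, load, step, halted):
    """omega_prefix: the first n bits of Omega_L, as a Fraction.
    Returns the set of programs of length <= n that halt."""
    programs = iter(programs)        # infinite stream of all L-programs
    live, done = {}, set()
    mass = Fraction(0)               # halting probability observed so far
    while mass < omega_prefix:       # mass creeps up toward Omega_L, and
        p = next(programs)           #   Omega_L > omega_prefix, so we exit
        live[p] = load(p)            # dovetail: admit one new program...
        for q in list(live):         # ...then step every live program once
            live[q] = step(live[q])
            if halted(live[q]):
                mass += Fraction(1, 2 ** len(q))
                done.add(q)
                del live[q]
    # Any short program not yet seen halting can never halt: its 2**-len
    # of extra probability mass would push Omega_L past omega_prefix + 2**-n,
    # contradicting omega_prefix being Omega_L's first n bits.
    return {p for p in done if len(p) <= n}
```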

“How many bits?”, you might ask. Well, that depends on L, as one would expect, but it also depends on which formal system of logic you specify. As Chaitin explains in his book, Thinking about Gödel & Turing, the number of bits of ΩL that you can prove in a formal system F for fixed L is a measurement of the Kolmogorov–Chaitin complexity K(F) held in the axioms of F.

What the hell does that mean? Well, it means that formal systems contain crystallized knowledge, stored in the form of axioms, and that the set of truths provable from F has a size that depends directly on the number of bits of information-theoretic entropy in F. Each digit of ΩL is random, meaning both that it contains maximum entropy and that it cannot be predicted algorithmically. (These two statements are equivalent in information theory.) If adding an axiom X to theory F grants the power to prove an additional k bits of ΩL, then K(X|F) ≥ k. However, your value for ΩL will be wrong if F+X contains a contradiction: garbage in, garbage out.

Tangent over.

Penrose (and Lucas before him) would have us believe that we can skip all this stuff with proof and axioms, and just directly perceive the truth straight off the piece of paper, using a magic truth sensor in our brain.

A more plausible scenario is that humans contain an in-born model of logic that’s more powerful than ZFC. What does “more powerful” mean? Higher in information entropy content, and therefore able to prove more theorems. Suppose then that when Penrose “perceives” the consistency of ZFC, what’s really going on is that his native axioms (NA) are more powerful than ZFC, and this grants him the power to formulate a mental proof of ZFC’s consistency. Now there’s the question of “is NA consistent?”, and that one is much harder to answer, especially because NA’s axioms are hardwired into the brain and aren’t available to Penrose for introspection.

To prove the consistency of NA, one might try building NA+Con(NA), then NA+Con(NA)+Con(NA+Con(NA)), and so on. (Call the x-th theory in this chain NAx, so each step forms NAx+Con(NAx).) However, this would represent an infinite number of assumptions and would thus require an infinite amount of true information to justify.

What’s more, there isn’t just one thing called “Con(NAx)”, for any given value of x. When we’ve been saying “Con(F)” for some formal system F, it’s really just been shorthand for a very complex statement in F about a polynomial, or a set, or whatever F’s basis is… and there are multiple ways to “spell” Con(F), each of which proves something about a different polynomial, and each of which thus allows the proof of a different set of theorems! In particular, each “spelling” of Con(NAx) permits us to derive additional digits of ΩL in the NA basis — digits which we established are random! — and the actual digits we derive will depend on how we “spelled” Con(NAx)!

Thus, for Penrose’s argument to be right, Penrose would have to know an infinite amount of true information; at the very least, he should be able to derive the digits of ΩL. That’s not logically impossible… but it does seem pretty damn unlikely. And it definitely contradicts the laws of physics as we know them, because information entropy is tightly coupled to thermodynamic entropy and the Bekenstein bound imposes a maximum limit on the entropy for a given volume of space.

To summarize: Penrose and Lucas fail to convincingly make the leap from “I can tell that ZFC is consistent” to “I can tell if any formal system is consistent”. The former can be achieved with a finite amount of axiomatic information and does not violate Gödel’s incompleteness theorems; the latter requires Penrose and Lucas to know a completed infinity of true facts (which is physically impossible, to the best of our knowledge). What’s more, those infinite facts are random: each one is true without any reason, explanation, or justification.

TWO: Wait, Who Said Humans Are Consistent?

Some mathematicians assert that doubts about whether PA is consistent, and whether it can be proven to be consistent, are trivial and pointless, partly because this places all of mathematics in doubt. However, as to the triviality, much of the most interesting and profound mathematics of this century has been concerned with just such doubts. As to the number of proofs that are cast into doubt by the possibility of inconsistency in PA, the "perfect consistency or total gibberish" approach to formal systems evidently favored by many mathematicians is not really justified.
— Math Pages, “Is Arithmetic Consistent?”

You’ve been wrong before, haven’t you? Come on, admit it. We’ve all been wrong about something, even if it was when you were a toddler. I guarantee that there’s at least one thing in your life that you once believed to be true but turned out not to be. It’s universal.

Okay, so you admitted that you’ve been wrong before. You know, back then. But you updated your beliefs, and now you’re a different person. A truer person. You were wrong then, but surely you’re not wrong now.

Ha! I was just pulling your leg! You’re still wrong about something, I’m sure. It’s okay, so am I, and so is every other human being who has ever existed.

Let’s talk about Gödel’s theorems again. If Steve walks up to you and says, in all seriousness, “Wrong? I am never wrong, for I am Steve!”, does that really increase your confidence that Steve is never wrong? No, didn’t think so. In fact, if there’s anything you can conclude from what Steve said, it’s that “Steve is never wrong” is wrong. And so it goes for formal logical systems: never trust the ones that say “you can trust me”.

Penrose would have us believe that Steve is right — that humans are consistent, i.e. we never believe false things. Now, “never” is a strong word. Couldn’t it simply be that we rarely believe false things?

3 is prime, 5 is prime, and 7 is prime. By induction, all the odd integers are prime.
— aphorism of unknown provenance

(The joke turns on two meanings of “induction”: inductive reasoning – generalizing from the evidence of finitely many examples – versus mathematical induction – formally proving that sentence X0 is true and that sentence Xn implies sentence Xn+1, for every n.)
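
The formal kind is sturdy enough to machine-check – here’s a minimal Lean rendition:

```lean
-- Mathematical induction: if X 0 holds, and X n implies X (n + 1) for
-- every n, then X holds for every natural number.
example (X : Nat → Prop) (base : X 0) (step : ∀ n, X n → X (n + 1)) :
    ∀ n, X n := by
  intro n
  induction n with
  | zero => exact base
  | succ k ih => exact step k ih
```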

Now, most mathematicians run away in horror at the suggestion that their favorite formal system might be inconsistent. After all, it’s a fairly elementary fact that, if you can prove even one contradiction, then the principle of explosion lets you prove anything at all. But that disregards a property we might call “local consistency”: the practical reality is that, if your system is mostly consistent — that is, the rules you use most frequently are consistent with each other — then your system will yield valid proofs most of the time. Sure, you can use the system to prove an absurdity… but if you don’t already know where the contradiction lies, then your odds of tripping over it by accident are slim. Any formal system that humans use regularly has this local consistency property, because otherwise we would have noticed the absurdity already.
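
And the explosion itself? From one contradiction, anything at all – in Lean, it’s a single line:

```lean
-- Ex falso quodlibet: given a proof of P and a proof of ¬P, any Q follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```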

Hey, wait a minute. “Mostly consistent? Correct most of the time?” That sounds like us!

“ZFC was right once, ZFC was right twice, ZFC was right three times. By induction, ZFC is always right!”

To summarize: humans are occasionally wrong. We get by because our wrongness is merely occasional, and usually that’s good enough. But being mistaken even once means that humans are logically inconsistent. Gödel’s theorem doesn’t apply to us, because we’re willing to guess (which necessarily means believing a falsehood now and then).

THREE: Where Are the Hyper-Gödelian Computers?

Computing, as we understand it today, is based on algorithms — that is, reaching an answer by following a series of logical steps. A leads to B, B leads to C, and so on. To formalize what was meant by an algorithm, Turing proposed a very simple model of a computer: a spool of paper tape arranged in squares, a tape head that can read, write, and erase the square of tape directly beneath it, a small internal memory that can hold one of a fixed, finite set of “states”, and a fixed list of rules telling the tape head what to do next. This simple machine is capable of carrying out any algorithm we know how to specify — a claim known as the Church–Turing thesis. Everything from a lowly calculator up to a supercomputer can be reduced to this model, called the “deterministic Turing machine” (DTM).
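
The whole model fits in a few lines of Python. This is a sketch of my own, with a toy rule table (binary increment), not anything from Turing’s paper:

```python
# A minimal deterministic Turing machine. "_" marks a blank tape square.
def run_dtm(rules, tape, state="start", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip("_")

# Toy rule table: increment a binary number. Scan right to the end of
# the number, then walk back left, carrying the +1.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # ran off the end; turn around
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1", "L"),   # absorb the carry and stop
    ("carry", "_"): ("halt",  "1", "L"),   # carry past the leftmost digit
}

print(run_dtm(increment, "1011"))   # 11 + 1 = 12, so this prints "1100"
```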

Computer scientists sometimes imagine how powerful a computer would be if it could guess the right answer instantly. But a magic box that always gives the right answer isn’t very interesting… or very trustworthy. How would one know if the box were broken? So we limit our guessing computer by adding one requirement: it can only guess the correct answer if it can also provide a proof that the answer it guessed is right, and the proof has to be something that a DTM could check in a reasonable amount of time. The result: a “non-deterministic Turing machine” (NTM).

Even with this limitation, an NTM would be fantastically powerful: it could spit out a proof of nearly any mathematical conjecture you care to imagine, it could solve every logistical problem in the world, it could break any encryption, and so on. Goldbach conjecture got you down? Just have the NTM try to guess a proof. Want to solve the Traveling Salesman Problem? Let the NTM take a spin. Your business rival uses AES-256 encryption to protect their secrets? No problem, just tell the NTM a few facts about the file format, and the NTM will guess the correct key.

(Technically speaking, DTMs and NTMs can answer exactly the same set of questions; the difference is speed. An NTM’s guessing power is believed to make it vastly faster at reaching an answer: questions that, as far as we know, require an exponential amount of time on a DTM would be practical to ask of an NTM. Whether that gap is real is the famous P versus NP problem.)
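
The “check the guess” half is the part we can actually write today. Here’s a toy verifier for the Traveling Salesman decision problem – the distance table is invented for the example. Checking the guessed tour takes linear time, even though finding it appears to take exponential time:

```python
# Verifying an NTM-style "guess" for Traveling Salesman: is there a tour
# of all cities no longer than `budget`? Checking a proposed tour is fast.
def verify_tour(dist, tour, budget):
    """dist[i][j]: distance between cities i and j; tour: guessed order."""
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False                 # not a permutation: visits are wrong
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= budget

dist = [[0, 2, 9, 4],
        [2, 0, 6, 3],
        [9, 6, 0, 8],
        [4, 3, 8, 0]]
print(verify_tour(dist, [0, 1, 2, 3], budget=20))   # True: 2+6+8+4 = 20
```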

Penrose proposes something far more than an NTM. He believes that there is a natural physical phenomenon that inerrantly arrives at true conclusions without intervening steps. In contrast with NTMs, Penrose’s proposed phenomenon is not required to provide a proof that it gave the right answer. If this phenomenon existed, it would be – with no exaggeration – unimaginably powerful.

Penrose’s phenomenon would not be confined to brains, of course. Whatever Lucas may have intended, Penrose’s argument is not a resurrection of vitalism, dualism, or the like: the argument proposes new physics, and new physics invites the strong possibility that we will figure out how to take advantage of it. Let’s say we can build a computer from Penrose’s phenomenon, and let’s call it a Penrose machine (PM).

Imagine a computer that could instantly give you the answer to any well-formed question! “How many words of French have ever been spoken?”, you ask? That’s easy: just give the PM a French dictionary and the starting conditions of the universe, and it can simulate a new universe, count the number of spoken French words, and return the answer to you instantly! Or maybe the PM is limited to yes/no answers; in that case, ask it if the answer is more than a billion words, more than a trillion, and so on, until you get a “no”, then back off and do a binary search. Yes, that will require bringing thousands of universes into existence, and each of those simulated universes will have the ability to build PMs of its own. But it’s okay, because the PM proposal removes all the limits on what computers can do. Every question with a yes/no answer could be answered by the PM, instantly — even questions that couldn’t be answered by a Turing machine with infinite time and infinite tape!
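
That back-off trick is ordinary binary search; the only magic is in the oracle. Here’s the search half in Python, with the hypothetical PM’s yes/no answers faked by a closure so the sketch actually runs:

```python
# Recovering a number from yes/no answers alone. `more_than(k)` stands in
# for asking the PM "is the answer more than k?"; O(log N) questions total.
def count_via_oracle(more_than):
    hi = 1
    while more_than(hi):         # exponential search for an upper bound
        hi *= 2
    lo = 0
    while lo < hi:               # binary search; invariant: lo <= N <= hi
        mid = (lo + hi) // 2
        if more_than(mid):
            lo = mid + 1
        else:
            hi = mid
    return lo

# No Penrose machine on hand, so fake the oracle with a hidden number:
N = 48_201_553
print(count_via_oracle(lambda k: N > k))   # prints 48201553
```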

(In technical terms, Turing machines given infinite time and infinite tape are limited to answering decision problems from class R, the recursive language problems. Since the PM can dodge Gödel’s incompleteness theorems, the PM must be able to solve ALL, the class of all possible yes/no decision problems, and Penrose’s argument would have us believe that it can do so in a small – constant or polynomial – amount of time. The class R is already huge, but if R is an ant, then the class ALL is the whole universe. Most of the questions that fall in ALL - R are fairly esoteric, but there are a few real-world yes/no questions that we’d like to answer. Of those, most of them have to do with matrix math or Diophantine equations, but a very practical one is “Is there an airfare cheaper than $X from city Y to city Z?”.)

But… if the laws of physics permit such mind-boggling feats of logic, then why do we experience a universe where only DTMs are practical?

It’s not impossible that the laws of physics might be layered, so that deeper layers have access to more powerful models of computation. In fact, we know of one such layer: quantum physics is believed to be faster than a DTM but slower than an NTM, yet it’s much easier to build a DTM than it is to build a quantum computer. That said, how quantum physics adds up to classical physics (and thus the DTM) is widely recognized as an important question — a question that needs an answer. What’s more, we’ve made good inroads into answering that question, and in demonstrating why quantum physics necessarily simplifies to classical physics.

Penrose’s proposal, however, does nothing to answer how such a powerful force could ever bind itself into knots that resemble our reality. Because a PM would have no limitations on what questions it could answer, there is no “shape” to the reality it would necessarily create — no reason to expect that a PM would inherently limit itself to the Turing model of computation, as opposed to something more powerful or less powerful.

To summarize: Penrose fails to explain how his proposal would lead to a universe that looks like our own — and it is difficult to imagine how it ever could. Penrose’s claims are extraordinary, and therefore require extraordinary evidence – evidence which Penrose does not provide.

BONUS: Why Didn’t Evolution Take Advantage of This?

Penrose proposes that the (as-yet purely hypothetical) deeper theory uniting gravity with quantum mechanics will somehow be able to provide us with non-Gödelian truths. Moreover, he proposes that these physics are already active in our biology, accessible to our brains. There exists a particular cell structure, the microtubule, that he claims is the cellular version of the Penrose machine (PM) discussed above. This, he says, is the reason humans are able to perceive the consistency of formal logic systems like ZFC.

Given Penrose’s proposal, one might naïvely expect that microtubules are found in the human brain and nowhere else. That expectation would be wrong. Microtubules are found in all cell types in the human body. Indeed, they are found in all eukaryotic cells — from humans to worms to sponges to single-celled amoebae. Their function is fairly well-understood: they are a major component of the cytoskeleton, which allows the cell to change and hold its shape; and they act as a cellular “railroad” for motor proteins to grip as they move resources from one part of the cell to another. They play a particularly important role in cell division, forming the “mitotic spindle” which organizes the chromosomes, pulls them apart, and steers them into the daughter cells’ nuclei.

It’s not unheard of for evolution to assign multiple functions to a single structure… but having a single structure act as both a transportation network and a computer seems odd, to say the least. Especially since we have a fairly good idea of how microtubules are formed from their protein dimer subunits, how they are disassembled, and which motor proteins bind to them, and we haven’t seen anything that looks like it might be a molecule-encoded question floating toward a microtubule, or a molecule-encoded answer floating away from one. Microtubules don’t rearrange molecules; they just act as a toothed rail for motor proteins to ratchet along.

(And no, it’s not an electrical current or an ion flow or anything like that, either. We can stick electrical probes into individual cells and measure the electrical gradients of cellular organelles. That’s how we learned how mitochondria work. “Microtubules are doing something weird with Ca²⁺ ions” would have won someone a Nobel prize.)

Anyway, that’s not my point. My point is: given the unfathomable computational power of a PM, one would expect evolution to take better advantage of it.

Imagine a rabbit and a fox. The rabbit is hiding in its burrow, wondering if the fox is waiting outside today. It considers the conditions under which it has seen the fox in the past: the smell on the air, the sounds made by the birds (who may or may not see the fox), the current weather, and so on. It processes all this information in an instant, using the naturally occurring PMs in its brain cells to simulate countless universes, each of them containing a copy of the fox. The fox, both the real one and the countless simulations thereof, is itself spawning countless universes in its own mind, simulating the rabbit’s behavior, deciding whether it should wait by the burrow or try for easier prey today. Those simulations spawn simulations of their own, and so on to infinity, and in the blink of an eye, the rabbit and the fox reach their conclusion:

+++ Divide By Cucumber Error. Please Reinstall Universe And Reboot +++

Does this sound like real life? Infinitely intelligent predators trying to hunt infinitely intelligent prey? No, it doesn’t. In real life, there’s an entire subfield of economics – the study of bounded rationality – devoted to how individuals make decisions in the face of limited decision-making time, partial information, and finite computational power.

(Wait, economics? Yeah, economics. If you spend enough time around economists, everything is economics.)

The rabbit does the best it can with the information it has, but the rabbit instinctually knows that thinking takes time, and that time is precious. The rabbit weighs the pros and cons, the risks of leaving (the fox) vs the risks of staying put (hunger), and makes a choice that ultimately is some piecemeal approximation of an innate Bayesian decision theory. The rabbit steps outside, hears the fox step on a twig, and runs back to the burrow. The fox gives chase, catches the rabbit, and devours it.
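
For concreteness, the rabbit’s calculation might look something like this toy sketch. Every number is invented; the shape of the computation is the point:

```python
# The rabbit's choice as a toy Bayesian decision problem.
p_fox = 0.3   # belief that the fox is outside, given smell, birdsong, etc.

# utility[action][world] = payoff of taking `action` when `world` is true
utility = {
    "leave": {"fox": -100.0, "no_fox": 10.0},   # eaten vs. well fed
    "stay":  {"fox":   -1.0, "no_fox": -1.0},   # a bit hungrier either way
}

def expected_utility(action):
    return (p_fox * utility[action]["fox"]
            + (1 - p_fox) * utility[action]["no_fox"])

print(max(utility, key=expected_utility))
# "stay": -1 beats "leave" at 0.3 * (-100) + 0.7 * 10 = -23
```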

Or consider the lowly amoeba. It’s a single-celled organism, but it does contain microtubules, so a reasonable reading of Penrose’s argument would suggest that it could perform any computation that a human can. An amoeba should be as smart as an octopus, then: capable of learning to associate stimuli with rewards, capable of studying its environment and hatching escape plans, capable perhaps of play, of thought, of consciousness. Treatment of amoebic dysentery with nitroimidazole would be a holocaust in miniature, snuffing out billions of conscious minds.

To summarize: if hyper-Gödelian phenomena were accessible to biology, you can be damn well sure that evolution would take advantage of it. Since Penrose’s microtubules would be ALL-powerful (pun intended), every plant, animal, mushroom, and protist would be as smart (and as conscious) as a human.


CC BY 4.0 “Three Objections to the Penrose–Lucas Argument” is Copyright © 2016 Donald King.
This work is licensed under a Creative Commons Attribution 4.0 International License.