One of the long-time cranks who's commented on this blog is a bozo who goes by the name "Vorlath". Vorlath is a hard-core Cantor crank, and so he usually shows up to rant whenever the subject of Cantor comes up. But last week, while I was busy dealing with the Scientopia site hosting trouble, a reader sent me a link to a piece Vorlath wrote about the Halting problem. Apparently, he doesn't like that proof either.

Personally, the proof that the halting problem is unsolvable is one of my all-time favorite mathematical proofs. It's incredibly simple - just a couple of steps. It's very *concrete* - the key to the proof is a program that you can actually write, easily, in a couple of lines of code in a scripting language. And best of all, it's incredibly *profound* - it proves something very similar to Gödel's incompleteness theorem. It's wonderful.

To show you how simple it is, I'm going to walk you through it - in all of its technical details.

Suppose I've got a program, $P$. I can treat $P$ as a function, from its input to its output. Of course, not every program produces a result for every input. For example, I can write a program which takes an integer as a parameter, and never stops if its input is divisible by 2:

```python
def RunForeverIfDivisibleByTwo(i):
    if i % 2 == 0:
        while True:
            continue
    else:
        return i / 2
```

The halting problem is basically a formal way of asking if you can tell whether or not an *arbitrary* program will eventually halt.

In other words, can you write a program called a halting oracle, `HaltingOracle(program, input)`, which returns true if `program(input)` would eventually halt, and which returns false if it wouldn't?

The answer is: no, you can't.

Why not? We can finally get to the proof. Suppose, just suppose, that you could write a halting oracle. Then you could run it on any program at all, and see if it eventually halts. So, run it on this:

```python
def Deceiver(i):
    oracle = i[0]
    inp = i[1]  # ('in' is a reserved word in Python)
    if oracle(Deceiver, i):
        while True:
            continue
    else:
        return i
```

So, the input to `Deceiver` is actually a list of two elements: the first one is a proposed halting oracle; the second is another input. What the deceiver does is ask the oracle: "Do you think I'll halt for input i?" If the oracle says, "Yes, you'll halt", then the program goes into an infinite loop. If the oracle says "No, you won't halt", then it halts. So no matter what the oracle says, it's wrong.

So, if you come up with a halting oracle - a program that can tell whether or not other programs eventually halt - it will produce the *wrong* result for `Deceiver`. And that means that it's *not* a halting oracle - because there's at least one program where it generates the wrong result.

That's it. That's the entire proof. And that little Python snippet up there? That's a *real* program that would work as a real counterexample for any real halting oracle!
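To make that concrete, here's a sketch in Python. The `naive_oracle` here is a made-up stand-in (a real oracle can't exist, which is the whole point); any candidate oracle you substitute will fail the same way:

```python
# A deliberately naive stand-in "oracle" that guesses every program halts
# on every input. Any real candidate oracle would fail the same way.
def naive_oracle(program, inp):
    return True  # claims: program(inp) halts

def deceiver(i):
    oracle = i[0]
    if oracle(deceiver, i):
        while True:      # oracle said "halts", so loop forever
            continue
    else:
        return i         # oracle said "runs forever", so halt immediately

# The oracle predicts that deceiver halts on this input...
prediction = naive_oracle(deceiver, [naive_oracle, 42])
# ...but if that were true, deceiver([naive_oracle, 42]) would loop forever:
# the opposite of the prediction. Whatever the oracle answers, it's wrong.
```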

See what I mean about it being very simple, and very concrete?

The reason that it's so profound is that you can formulate *any* mechanical proof process as a computer program. If a proof exists for a theorem, then you can write a program which searches for the proof, and will eventually find it. We *can* do that. We can write a program which searches the entire space of first order predicate logic proofs, trying to find a proof for a specific theorem. If it's a theorem of FOPL, it *will* eventually find a proof. It might take a very, very long time. But if a statement *isn't* a valid theorem, then the program will never stop. It will keep searching, and searching, and searching. And if the program doesn't stop, you can never actually conclude that "It's not a theorem", because if you just ran the program for another 10, or 100, or 1000 steps, it *might* stop.
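Here's a minimal sketch of that search loop. The `is_valid_proof_of` checker is a toy stand-in; for real first-order logic it would be a mechanical proof verifier, which is where all the actual work lives:

```python
from itertools import count, product

SYMBOLS = "abc"  # stand-in alphabet; a real system would use its proof syntax

def all_finite_strings():
    """Yield every finite string over SYMBOLS, shortest first."""
    for length in count(1):
        for chars in product(SYMBOLS, repeat=length):
            yield "".join(chars)

def search_for_proof(theorem, is_valid_proof_of):
    """Halts if and only if some finite string is a valid proof of `theorem`."""
    for candidate in all_finite_strings():
        if is_valid_proof_of(candidate, theorem):
            return candidate  # found a proof: halt
    # if no proof exists, this loop simply never terminates

# Toy verifier: pretend the string "abc" is the one valid proof of "T".
found = search_for_proof("T", lambda proof, thm: proof == "abc" and thm == "T")
```

If the theorem has a proof, the search halts; if not, it runs forever - exactly the asymmetry the paragraph above describes.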

If you could solve the halting problem, then you could take a theorem proving program for a specific logical domain, and given any statement, determine whether or not it was a provable theorem - by asking the halting oracle whether the program would halt on that input! (See what I mean about it being like Gödel?)

So - Vorlath doesn't like this proof. He thinks it's really no good. And so, he sets out to show that it's incorrect. And the way he does that is... by showing that a halting oracle will generate the wrong result for some inputs!

Seriously. He's basically taking the *validity* of the proof, and using it to conclude that there's something wrong with the proof.

The way that he does this is by claiming that the program - the `Deceiver` above - doesn't have a well-defined result. Since we don't know what the halting oracle will produce, we don't know what the program will do. We don't know whether the program will halt or not, because we don't know whether the halting oracle will *say* that it will halt. It's got nothing to do with whether or not we can determine whether or not a given program will halt - the entire problem is that we don't know what the Oracle will answer. If we knew what the Oracle would answer, then we'd know whether the program would halt.

In his words:

The issue here is that this is not a proof by contradiction at all. In fact, we can show this quite clearly by using an Enabler program that does exactly what the F function says (including the Oracle).

```
Program Enabler(Function F) {
    If (!F(Enabler, F))
        while(true);
}
```

The only thing that changed is that we negated the value returned by F. Here, the Enabler program does exactly what the Oracle says it will do when F is set to the Oracle program. Unfortunately, it is still undecidable because we still don't know which answer the Oracle will return. This is why the Deceiver program is indeterminate. It is not because of the bogus contradiction.

The problem with Vorlath's whole argument is that he doesn't seem to understand what the halting problem *is*. The question isn't "Can I look at a specific program, and figure out if it halts?" The question is, "*Can I write a program that can tell whether any other program will halt?*".

His argument basically says that *in the proof*, we, as humans reading the proof, can't say what the halting oracle will return, and that therefore, the halting program is indeterminate, and that that's the cause of the contradiction.

That's not it at all.

The halting problem is a question of whether or not *a program* can tell whether *any* program will halt. And the halting proof is remarkably concrete. Write a program that you believe will correctly determine whether or not other programs halt - and the deceiver *will* work: it will produce a behavior different than what the oracle predicts. The oracle *will* be wrong for the deceiver. But if you take Vorlath's `Enabler`, and run it through the oracle? It will work: it will do exactly what the oracle says it will do. The oracle *will be right*. It doesn't matter what the oracle thinks `Enabler` will do: in a real program, whatever the oracle answers is what `Enabler` will do. Enabler isn't indeterminate: for any given oracle, it will do exactly what the oracle predicts. It may do different things for different oracles - but that's perfectly all right. It doesn't matter whether *we* as humans can look at it on paper, and tell whether it can halt. The question is, when the oracle produces a prediction - however it does it - will that prediction be correct? If you run the enabler through a specific oracle, its prediction will be correct; if you run the deceiver through a specific oracle, its prediction will be *incorrect*. That's the whole point.
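To see the contrast in running code, here's Vorlath's `Enabler` transcribed into Python, paired with a toy stand-in oracle (no real oracle exists, but any fixed candidate behaves the same way here):

```python
# Vorlath's Enabler: with any fixed oracle, it does what that oracle predicts.
def enabler(i):
    oracle = i[0]
    if not oracle(enabler, i):
        while True:      # oracle said "won't halt" -- so don't halt
            continue
    return i             # oracle said "will halt" -- so halt

# A toy stand-in oracle that claims everything halts:
def says_halts(program, inp):
    return True

# For this oracle, enabler simply returns -- the prediction comes true.
result = enabler([says_halts, 7])
```

Swap `not` back out and you have the deceiver, for which the same oracle's prediction comes out false. Same oracle, same structure; one program confirms it, the other refutes it.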

Anyway... from here, Vorlath goes into a pointless sidetrack working through a specific table-based example that purportedly demonstrates the problem. This, again, demonstrates that he really doesn't even understand what the halting proof is supposed to prove. Remember, it proves that a halting oracle will always get the wrong answer for some programs. That's all it proves; that's all it has to prove. So, in his own words:

So what is going on? The issue is not about being able to calculate the halting status of a given function. It is solely about definitions. Can we define a value to be used both as input to a decision and as the conclusion of that decision? The answer is that it is foolish to rely on this. That's all the halting problem is about. We can still calculate the halting status of every input just as long as we don't attach meaning (definitions) to those inputs (especially if we try to relate those values to the halting status).

This couldn't possibly be more wrong. This has nothing to do with definitions. And it doesn't have anything to do with being able to determine the halting status of a particular function. It has to do with whether *a program* can make a correct prediction about *another program*. It's about very concrete programs. He doesn't like the recursive nature of the solution - but *real computer programs can be fully recursive*. I *can* take a halting oracle program, and pass it to a halting oracle program. I can take a halting oracle, and use it as a parameter to a concrete deceiver program. You can't escape this trap.
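A trivial illustration of that recursion, in Python: a function is just data, so nothing stops you from handing it to itself (the `describe` helper is made up for the example):

```python
def describe(f):
    """Treat a program as ordinary data: report its name and argument count."""
    return (f.__name__, f.__code__.co_argcount)

# Nothing stops us from passing describe to itself:
info = describe(describe)
```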

In fact, there's even a whole fascinating research area that's built on this sort of meta-circularity, where you pass the code of a program as parameter to that program. There's a nifty idea called *partial evaluation*. In partial evaluation, you can take a program which is basically a sort of static program optimizer, and by running it *on itself*, you can turn an interpreter into a compiler. It actually *works*. It's a bit hard to follow. But it *works*. There are really people working on building compilers this way! There's a book on it, available here. And I'll probably write a post about it at some point.
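Real partial evaluators transform source code, but the flavor can be hinted at with a toy interpreter and `functools.partial` crudely standing in for the specializer. Everything here - the tiny language, the sample program - is made up for illustration:

```python
from functools import partial

def interpret(program, x):
    """A toy interpreter: `program` is a list of ('add', n) / ('mul', n) ops."""
    for op, n in program:
        if op == "add":
            x += n
        elif op == "mul":
            x *= n
    return x

# A "source program" in the toy language: x -> 2*x + 1
double_plus_one = [("mul", 2), ("add", 1)]

# The specializer, modeled in the simplest possible way: freeze the program
# argument, leaving a function of the remaining input -- a "compiled" version.
compiled = partial(interpret, double_plus_one)
```

A genuine partial evaluator would go further and emit residual code with the interpretive overhead removed; that's the step that turns an interpreter into a compiler.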

So? Vorlath: moron. His counter to the halting proof? It's not a counter: it's an example of how the halting proof produces the right result. His distinction between definition and computation? Totally bogus.

On the other hand... Alan Turing? Genius. Halting proof? Really cool. Recursive meta-programming? Super cool.

Mark, I don't know if you're just trying to present a simplified version of Turing's proof, but as stated your proof doesn't work. Strictly speaking, 'HaltingOracle' should take two integer inputs, one being the Godel code of 'program' and the other being input. Let 'd' be the Godel code of the program 'Deceiver'. Then your line 'if oracle(Deceiver, i):' should be replaced by 'if oracle(d, i):'. But this is an impossible line to include in your program, since it's impossible to quote the Godel code of a program in the program itself.

This problem can be corrected, of course, as Turing did in his original proof. But I don't remember the exact details.

Actually, the Kleene Recursion Theorem says that you can take any two-argument program and replace it with a one-argument program that passes its own encoding as the first argument in a call to itself.

That is, you can manipulate a program so that it does in fact compute its own Godel code!


MarkCC missed the whole point of my arguments regarding FLT. I have answered all the questions raised about FLT in several publications including some recent books. If he wants his criticisms to have traction he should publish them in refutable journals, not blogs. For enlightenment, visit the link: http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=63915

"So – Vorlath doesn’t like this proof. He thinks it’s really no good. And so, he sets out to show that it’s incorrect. And the way he does that is… by showing that a halting oracle will generate the wrong result for some inputs!

Seriously. He’s basically taking the validity of the proof, and using it to conclude that there’s something wrong with the proof."

My response to Vorlath? My C++ is a bit rusty so please forgive.

```
do {
    facepalm;
    headdesk;
} while (still breathing);
```

To this non-programmer, the halting problem seems similar to the Liar's paradox. In which case I guess the analogy would be that the liar's paradox disproves the notion that the truth value of all statements can be definitively determined.

Dude, he's making the same mistake Russell made (only much dumber). He's thinking that if we just make it so the Halting Oracle is not allowed to be an input... but you could still always *clone* the Halting Oracle, or represent it in some different way, a la what Godel did to Russell's system.

This is far dumber than Russell's error, of course, because what Russell failed to see was that any mathematical proof could be represented as a number. That's a rather ingenious observation. What Vorlath fails to see is that any computer program can be represented by, um, data in a computer. Oy.

Heh, it just occurred to me that if one rejects mind/body dualism, then the proof of the unsolvability of the Halting Problem also proves that no human could ever be a halting oracle either... You just imagine a computer program that emulates that human's behavior (never mind that it may be a practical impossibility to do so, if you believe in materialism then it's at least a theoretical possibility) and pass that program to the Deceiver.... D'oh!

I've wondered about this. Can the human mind answer the halting question in all cases? I would think yes if we extend the responses to include mu (or some other appropriate answer for non-yes and non-no situations), but then (being a bit of a materialist) I would imagine we could write the program to do so as well.

Can the human mind solve the halting problem? I don't know. I doubt it.

The halting problem is equivalent to a weaker version of Gödel's incompleteness theorem. Gödel showed, among other things, that there are statements in *first order logic* that are true, but which can't be proved using first order logic.

The way that you show those statements are true is by using *second-order logic*. But there is no decision procedure for second-order logic. In first-order, you can guarantee that if a proof exists, a sufficiently powerful proof search will find a proof for a true statement in a finite (but unbounded) amount of time. That's *not* true in second-order.

So in terms of a machine doing it, even if you allowed "undetermined" as an answer? It depends on how you handle undetermined. I mean, trivially, an oracle that *always* said "I don't know" would be correct. But if you only allowed it to answer undetermined in cases like the deceiver/enabler? Then no, you can't do it.

Of course, *in practice* humans can solve the halting problem fairly well most of the time -- well, okay, as well as they can spot bugs in general, which I suppose we could argue is pretty bad, but still... my point being, the practical limits of a human's ability to solve the halting problem will be when their mental resources are exhausted and/or when it is so complex they are likely to make an error, not on any theoretical impossibility.

Not that it isn't fun to discuss the theoretical (im)possibility! 🙂

Anyway, I'm stunned by people like Vorlath who reject proofs by contradiction -- especially very simple ones like this, or the ideal compression thing with that other guy. I mean... it's plain as day, to me. It bothers me intuitively, of course, but if there's a contradiction, there's a contradiction...

Actually, I think that that's not really true. In trivial example programs, we're really good at it. But for complex things, we're not nearly as good as we think we are.

But when you take a complex program with complex inputs, and try to determine whether or not it halts? Not so easy at all. For example, in one project I worked on, I was trying to do a particular type of type unification. Given two possibly recursive types, can you tell when those two types are structurally equivalent? In our analyzer, several times, we got caught in infinite loops that we never would have predicted. They weren't explicit infinite loops; they were cases that would correspond in logic to instantiating quantifiers. We might need to instantiate an arbitrary number of times - but it should always have been finite. But there were cases where we didn't handle type recursion properly, and so we were instantiating an infinite number of times. The difference between code that correctly only instantiated a finite but unbounded number of times, and the code that instantiated an infinite number of times was incredibly subtle - it was dependent on the way that the type description was encoded as input to an SHA hash.

Yeah, that's why I backed off to say "We're as good halting oracles as we are debuggers", so take that as you may 🙂

"The difference between code that correctly only instantiated a finite but unbounded number of times, and the code that instantiated it an infinite number of times was incredibly subtle - it was dependent on the way that the type description was encoded as input to an SHA hash."

Sounds like a punchline for some XKCD of the future (if the guy hasn't already done one on this! :).

Why do you think the human mind can solve the halting problem?

The mind is limited by the brain. Make a program large enough, and it won't fit in your head. Make it long enough, and you'll die before you even finish reading it.

Ignoring the limitations of the brain, being able to solve the halting problem requires an oracle which evaluates some incomputable-to-turing-machines function. Why would you even begin to think the human mind met that criteria?

Keep in mind there are programs for which there is no possible proof that they halt, and no possible proof that they don't halt. I know that because we can write a program which enumerates all possible proofs, from smallest to largest, and stops when it finds a halt or not-halt proof. Combine that with the no-halting-solver result and think about it.

There's currently no reason to assume the mind is more powerful than a turing machine, at least in the reasoning-about-programs sense. Do you really believe you can do better than trying every possible proof? How would you even know if the result you divined was correct?

I suppose my fault is in thinking too specifically about the Deceiver program example Mark gave. I can immediately tell where an oracle program would fail, but writing a program to detect why the oracle program would fail would be difficult.

Also, I wouldn't suggest that the mind is more powerful than a turing machine. But I was wondering whether, assuming we allowed another answer besides yes or no to the halting question, it would change the outcome.

A problem I'm having is imagining a program that I couldn't look at and tell. Your example is a good one, and I can see now what that program would look like.

To your last question, I suppose I'm thinking it's similar to the MIU puzzle. Starting from the axiom MI, it's impossible to get to MU. However, I don't have to enumerate the infinite possibilities to know that MU isn't in the set. But writing a program that could have solved that (without a priori knowledge of the MIU system) would be difficult. So I'm left wondering why I can solve it, but not write a program to solve it.

(And please don't misunderstand, in no way am I doubting the correctness of the halting problem proof. I'm really just curious as to why my intuition tells me human intelligence could answer the problem for all programs when a machine couldn't).

Unless the answers you allow include "I don't know", the problem will always be incomputable. There is actually a proof that, for any non-trivial output property, there is no program which can always decide if a given program has that type of output.

Even if the human mind was more powerful than a turing machine, it would not be able to produce any better proofs. Turing machines can enumerate all possible proofs. A halting oracle can be correct about whether a machine halts or not, but it can't always prove that fact.

You're making one big mistake in your argument.

You *can't* write a program that enumerates every possible proof. You can only write a program that enumerates every possible *first order* proof.

You can prove things in second-order logic that aren't provable in first. That's part of the trick of Gödel: you've got a statement that you know is true, because you can prove it with second-order logic. But in first order logic, you can formulate the statement without using any second-order terms - but you can't prove it.

So if, as a human, you had a way of producing second-order proofs, you could generate halting proofs even if those halting proofs aren't computable by proof enumeration.

The question is, can you as a human produce proofs via some method that can't be expressed as a computer program? And the answer? No one really knows. Most materialists generally think that the mind is basically an elaborate computer, and so it's no more than turing complete. Some think that there's some sort of quantum effect that makes us somehow super-turing. And dualists/idealists sometimes argue that we're somehow something different, so that we're not really comparable to a purely mechanical device.

My own suspicion is that in terms of mechanics, we're Turing complete, but with a bloody huge, complex, and frequently buggy operating system :-).

I can absolutely write a program which enumerates every proof. I can prove it simply, too:

- It is possible to write a program which verifies that a proof is correct or incorrect

- Proofs are made up of a finite sequence of symbols

- It is possible to enumerate all finite sequences of symbols

- You can use your proof-verifying program to reject any sequence of symbols that is not a correct proof

- Therefore you can enumerate all proofs by enumerating all finite sequences of symbols, rejecting any that fail verification
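In Python, that enumerate-and-filter argument might be sketched like this; the verifier here is a toy stand-in, since step 1 is where all the real work lives:

```python
from itertools import count, product

def enumerate_sequences(symbols):
    """Yield every finite sequence over `symbols`, shortest first."""
    for length in count(0):
        for seq in product(symbols, repeat=length):
            yield "".join(seq)

# Hypothetical stand-in verifier (a real one would check proof validity):
# here, a "correct proof" is any nonempty string of even length.
def toy_verifier(s):
    return s != "" and len(s) % 2 == 0

# Enumerate all sequences, keeping only those the verifier accepts.
proofs = []
for s in enumerate_sequences("01"):
    if toy_verifier(s):
        proofs.append(s)
    if len(proofs) == 4:   # stop after the first few, for the demo
        break
```

Every accepted string appears after finitely many steps, which is all "enumerate" means; the loop as a whole never finishes, and doesn't need to.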

Strilanc:

What you are overlooking (other than the question of how the program will evaluate each string of symbols) is that there is an infinite number of finite strings of symbols. The program will take some nonzero amount of time to evaluate each string. Therefore, it would take infinite time to evaluate all strings.

Report back when your program has run for the appropriate infinite time.

That's even aside from testing/debugging issues: writing a program that can correctly evaluate all strings is nontrivial. But it's not the core of this problem, or proof.

Is it really possible to do the first point? I thought it wasn't, in the general case.

The rest, of course, is true and Vicki's objection fails - such a program will output proofs, and while there are infinitely many proofs, there is no proof that will require an infinite amount of time to output, which means that we declare that program enumerates every proof. (presuming we have a full decider for proof correctness, which I'm not convinced on.)

After all, it wouldn't be useful to say "it's impossible to make a program that enumerates the natural numbers"! A program is said to enumerate an infinite set iff it outputs only members of the set, and for all members of the set, there exists some time t at which that member will be output. So, if we have a program that outputs "0, 1, 2, ..." then the number n is output at time n, and thus that program outputs the set of natural numbers.

Likewise, a program can easily enumerate the set of all syntactically-valid proofs - I'm not as certain that it can enumerate the set of all semantically-valid proofs, but syntactically-valid? Most certainly.

(Note, by the way: It *is* possible for a program to enumerate all programs that halt! The problem is that it cannot be done *in a useful order*, and it's impossible to enumerate all programs that DO NOT halt.)

I'm not sure what the 'mu' answer would really mean here. If I consider a certain Turing machine and an input and I declare that it's impossible to tell whether it halts or not, I'm really saying that it never halts. After all, if the machine will ever halt, you can prove that it will - by just running it until it halts. If you're given a certain axiomatic system modeling Turing machines, you might be able to prove that the halting problem for a particular Turing machine and input is undecidable within that system, but you're simultaneously proving that it doesn't halt. That's not a contradiction, because you have to reason outside the system to determine that fact. I suppose that a hyper-intelligent alien might be able to find a particular Turing machine that they can prove humans aren't smart enough to figure out, but I'd imagine they'd have their own halting problems that only a hyper-hyper-civilization could solve, etc. But if I say "I will never be able to decide whether this machine halts or not", I'm contradicting myself, because that sentence implies that the machine will halt. I suppose I could say something like "I will never be able to decide whether the machine halts in the next 10,000 years", but I don't think that's what you mean.

The second to last sentence should actually be: "that sentence implies that the machine will NOT halt".
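For what it's worth, the parenthetical above - that the halting programs can be enumerated, just not in a useful order - is usually shown by "dovetailing": run every program a little at a time, and emit each one as soon as it halts. A toy sketch, with Python generators standing in for programs:

```python
# Toy model: a "program" is a generator; it halts when the generator ends.
def make_program(steps):
    def program():
        for _ in range(steps):
            yield
    return program

# Program i halts after i steps. (All of these halt; a non-halting program
# would be a generator that yields forever -- it would simply never be emitted.)
programs = {i: make_program(i)() for i in range(5)}

halted_order = []
while programs:
    for i in list(programs):
        try:
            next(programs[i])        # run program i for one more step
        except StopIteration:        # it halted: emit it
            halted_order.append(i)
            del programs[i]
```

Each halting program shows up after finitely many rounds, but the order is "whoever halts first", which is useless for deciding whether a given program will ever appear.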

Godel and this proof (which I'm seeing just for the first time now: my understanding is far from complete.) both seem to rely on self-reference as a mechanism for their results. Is there a way to try to set this up as: an oracle which is correct for all programs which do not refer to the oracle?

Or is my understanding so incomplete that that's a stupid question?

The problem is, what do you mean by "refer to the oracle"? You can always refer to the oracle indirectly.

Put it this way. Let's say the input to the Deceiver is expected to be a C program. So I have my oracle as a C program. But, it is not allowed to refer to the oracle, so I can't give it the C program. But, I *could* give it a C program which executes Java code, and then pass a Java port of the Oracle as the input to the Deceiver. Or I could write a C program which executes x86 machine code, and the input to the Deceiver is that C program plus the compiled Oracle.

And once I've gotten to that level, what is machine code but a string of numbers -- or, if you wish, one really honkin' big number? So now you are saying that the Oracle is not allowed to have certain *numbers* in its input? And in fact, since there are in theory an infinite number of possible computer architectures on which the Oracle might be compiled, there is now an infinite amount of numbers that the Oracle cannot accept as input!

I am ripping off my weak lay understanding of Godel's Incompleteness Theorem here. See, Bertrand Russell had been working on this multi-volume work called the Principia Mathematica that sought to define mathematics from the ground up to prevent any types of contradictions or incompleteness. It did this by having one rule: nothing can ever refer to itself. If you need to refer to something in the 1st level logic, you need to use a statement in the 2nd level logic, and so on. But Godel proved that it's impossible *in any system* to prevent it from referring to itself, because you can always encode the thing you are referring to as a number and then refer to itself anyway. Which really pissed off Russell, by the way 🙂

And please, for those who have a deeper understanding of Godel's theorem, be kind to me. 🙂 I know I probably said a lot of things there that are "not quite right", but I think at least the gist is there, eh?

"Is there a way to try to set this up as: an oracle which is correct for all programs which do not refer to the oracle?"

I don't think so. How would you write such a program (call it X) to *exclude* oracles?

If X has no way to distinguish between inputs from oracle programs and non-oracle programs, then there can be no rule preventing an oracle from being passed as an input to X, because X couldn't tell the difference, and the halting problem occurs. Agreed?

So, the only way to prevent the halting program from occurring is to have X refer to oracle programs specifically. I.e. have a command like "if the input comes from an oracle program, do abc instead." But referring to oracles directly breaks your rule.

Calling on other oracles shouldn't be an issue. We could "flatten" those calls anyway, embedding the oracle logic into the program we're testing. This is why I'm curious about the human mind stepping in as the oracle. We change the problem statement to ask whether a human could answer the halting question, and stipulate that the input program is not allowed to query the human for input.

As Mark answered above, this may still not change the situation. The problem I have with it is an intuitive one; I'm having trouble imagining a program that I couldn't answer the question for (admittedly, not a strong argument).

It might depend on what you mean by "a program that I couldn't answer the question for". How about a program like the following, which halts if and only if Goldbach's conjecture is false:

```
begin
    n = 4
    while (is_expressible_as_the_sum_of_two_primes(n))
        n += 2
    halt
```

Can you answer the halting question for that?
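For reference, here's that program as runnable Python, with a cutoff added so it actually terminates, and a brute-force `is_prime` filling in for the commenter's hypothetical helper:

```python
def is_prime(k):
    """Brute-force primality check by trial division."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def is_sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search(limit):
    """Return the first even counterexample below `limit`, or None.
    Without the limit, this loop halts iff Goldbach's conjecture is false."""
    n = 4
    while n < limit:
        if not is_sum_of_two_primes(n):
            return n   # counterexample found: halt
        n += 2
    return None
```

No counterexample turns up below any limit anyone has ever tried, which is exactly why nobody can currently answer the halting question for the unbounded version.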

Maybe we mean a more idealistic sense, where we imagine a human being given arbitrarily much time to study the program and the math, etc. So likely one could *eventually* prove/disprove the above, and produce a correct answer in finite time with high confidence of no errors.

How about then a program that systematically enumerates all theorems of ZFC (or any other complicated axiomatic system) and halts if it ever finds an inconsistency? If ZFC is indeed consistent, then I can't even begin to imagine how one could ever confidently decide this fact. Any proof that could be formalized in ZFC itself, basically all of standard mathematics, is impossible by Godel, and it seems iffy to rely on any more powerful systems because their consistency may be more in doubt than ZFC itself. And you can't just adopt the procedure of declaring an axiomatic system like ZFC consistent after some finite amount of time N of examining it, because, for example, for any particular N, one can construct axiomatic systems that are inconsistent, but the inconsistency is revealed only when you look substantially more than N steps deep, so that you will wrongly declare them consistent when they actually aren't.

The problem with querying the human is that if you are a materialist, then you believe there is *in theory* a program that could replicate the behavior of the human (it does not matter if you think, as I am increasingly beginning to suspect, that such a prospect may turn out to be a permanent practical impossibility). And that program will have, as Mark mentions, an infinite number of possible Godel numbers. So you can't exclude it from querying the human.

If you are a dualist, then I suppose all bets are off. 🙂

In any case, as I mentioned, there are plenty of practical reasons preventing a human from being a true halting oracle. Quick, tell me if this program halts, without executing it:

It takes a single integer as input. Does it halt if the input is 364?

I don't know if 'crashing' is the same as 'halting'. But seriously, if that's a real computer language, I'm glad I stopped at Java.

It's plain old C, just written very tersely and intentionally obfuscated. It solves the N queens problem. So the answer is: Yes, it does halt -- though for the input I asked about, the universe will likely end first.

No.

To do that is equivalent to being able to look at an arbitrary program, and determine whether or not it's an oracle. And programmatically recognizing whether or not a particular program *is* an oracle is actually harder than determining whether or not that program would halt.

I should probably write a post with the full formal version of it - which talks about Gödel numberings of programs, and the properties of the numberings. But basically, there's an infinite number of Gödel numbers which correspond to any particular computable function. So if there were an oracle, there'd be an infinite number of different programs that implemented that oracle. So you can't rely on recognizing the oracle by its Gödel number. You need to recognize it by some kind of analysis or proof process. And that proof process requires solving the halting problem. So if you could create a program which identified oracles, then you'd already have a solution to the halting problem, and you wouldn't need to identify the oracle.

Douglas Hofstadter has a rather wonderful multi-chapter lay explanation of Gödel's proof in *I Am a Strange Loop*, for anybody who is interested in it.

Are you saying that it's impossible for a program to recognize whether another program is really an oracle in disguise? Does that mean that the question of whether two Turing machines have the same output is computationally unsolvable? If so, that is highly counterintuitive.

Imagine these programs:

Program1(x) : return x + 1.

Program2(x) : Loop through all even numbers less than x until one is found that isn't the sum of two prime numbers. If one is found, return it. Otherwise, return x + 1.

Do these programs do the same thing? Nobody knows.
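The two programs can be rendered in Python (my own sketch; following the usual statement of Goldbach's conjecture, the search runs over even numbers greater than 2). They differ exactly when a Goldbach counterexample exists below x, which is why nobody knows whether they are equivalent:

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def program1(x):
    return x + 1

def program2(x):
    # Search the even numbers 4, 6, ..., < x for a Goldbach counterexample.
    for n in range(4, x, 2):
        if not is_sum_of_two_primes(n):
            return n
    return x + 1
```

Every x anyone has ever tried makes the two agree; a proof that they always agree would amount to a proof of Goldbach's conjecture.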

If you can't solve the Halting Problem, then by definition you can't solve the problem of whether two Turing machines have the same output. You could never know if one of them was going to halt when the other wouldn't!

It is *harder* to solve that question than to solve the halting problem! If you had a perfect halting oracle, you *still* would not be able to solve the equivalency problem.

(Note: there are many special cases that CAN be solved, and that is why the natural intuition is wrong - we can easily solve it for the kind of programs we're used to, and then we generalize that.)
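The direction "equivalence is at least as hard as halting" follows from a standard reduction, sketched here in Python (the names are mine): from any program and input, build two programs that are equivalent exactly when the program halts on that input.

```python
def equivalence_instance(p, i):
    """Turn 'does p halt on input i?' into an equivalence question:
    q1 and q2 agree on every input exactly when p(i) halts."""
    def q1(x):
        return 0
    def q2(x):
        p(i)        # diverges iff p(i) runs forever
        return 0
    return q1, q2
```

So a decider for program equivalence would double as a halting oracle; as the comment says, equivalence is in fact strictly harder still.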

I think people have the impression that, oh, sure, the supposed halting oracle isn't a true halting oracle, because if it's tricked into working on itself or another oracle, it can be tricked into giving wrong answers. But maybe you can make a halting almost-oracle that's plenty good enough for just about all practical applications, or good enough for all the vast number of problems that you don't accidentally or mischievously call a halting oracle recursively.

I'm guessing that that's not really possible, either, and the actual proof might have as a corollary that if you can't use a halting oracle to make sure that your halting oracle will halt, then you can't be sure that your halting oracle will halt for non-recursive applications, either. Or something?

If you have sufficient understanding of Gödel's proof (and I don't profess to have anything beyond a shallow lay understanding myself, but it's sufficient for this purpose) then it becomes clear that the Deceiver is merely a schematic for an *infinite* number of possible inputs which would cause any given halting oracle to fail.

Which is not to say you couldn't construct heuristic partial solutions to the halting problem. But anything that tries to solve it directly will be "almost" in the sense that the set of integers "almost" covers the set of reals. Or something.
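That "schematic" reading can be sketched directly (hypothetical Python, patterned on the post's Deceiver): every value of k below yields a textually different program, and each one defeats the same would-be oracle in exactly the same way.

```python
def make_deceiver(oracle, k):
    """The k-th member of an infinite family of Deceivers;
    every member contradicts whatever `oracle` predicts for it."""
    def deceiver(i):
        for _ in range(k):        # padding: a new program text for every k
            pass
        if oracle(deceiver, i):   # oracle says "halts"? Loop forever.
            while True:
                pass
        else:                     # oracle says "loops"? Halt immediately.
            return i
    return deceiver
```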

With apologies for the nitpick–

That program would *not* work as a real counterexample for a halting oracle until you corrected the function name "Deciever" to "Deceiver" (or changed the call to "Deceiver" to a call to "Deciever")!

This return value is false. 😛


Geoffrey Pullum's Scooping the Loop Snooper sets this problem to something vaguely resembling verse.

I've found that it is infinitesimally close to impossible to teach someone who begins by not wanting to learn, and not trusting the teacher.

Terry Tao posted another great blog thread.

The “no self-defeating object” argument, revisited

18 October, 2010

http://terrytao.wordpress.com/2010/10/18/the-no-self-defeating-object-argument-revisited/

Despite the presence of these non-mathematical analogies, though, proofs by contradiction are still often viewed with suspicion and unease by many students of mathematics. Perhaps the quintessential example of this is the standard proof of Cantor’s theorem that the set {R} of real numbers is uncountable. This is about as short and as elegant a proof by contradiction as one can have without being utterly trivial, and despite this (or perhaps because of this) it seems to offend the reason of many people when they are first exposed to it, to an extent far greater than most other results in mathematics. (The only other two examples I know of that come close to doing this are the fact that the real number 0.999... is equal to 1, and the solution to the blue-eyed islanders puzzle.)

Why is it that the `in` variable is set but never used?

Late to the game, but I wanted to clear up a misconception.

I fully understand the halting problem. You just completely misunderstood what I was getting at. The deceiver program is ONE specific program. So is the enabler program. Now, the enabler program causes no contradiction. But it is identical in every other way to the deceiver. Yet it is still undecidable. You seem to agree from the following quote.

WOW! Just WOW! And I'm supposed to be the crackpot. This is pure lunacy.

If the Enabler uses the Oracle on itself, then the Oracle will be indeterminate and so will the Enabler. Both results from the Oracle are possible. That is the definition of indeterminate. No ONE particular answer is forced. You cannot state whether the Enabler will halt or not.

Then you object to this by saying:

This could not be more wrong. If we know that a program is indeterminate, then the Oracle, even if it were to give a correct answer, would be indeterminate as well. You say no, it HAS to give an answer. But this is my point. It would have to make up the answer from thin air. This means there is not enough information in what's given to the Oracle to force it to one answer instead of the other. Get it?

The contradiction does NOT come from what you say here:

This is just silliness. The Enabler program proves that there isn't enough information passed to the Oracle. That is where the indeterminate result comes from. It does not come from the contradiction.

BTW, the halting problem is about definitions. It always has been. I'm not sure why you contest this. It's what mathematicians have always said about the halting problem. Sorry if you've not gotten the memo on this. But you're calling mathematicians crackpots.

All programs either halt or don't. So the halting problem is about knowing if we can define the halting state in a certain way (according to the description of the halting problem) that will give us an answer. Note that if you describe the halting problem in a different way, your contradiction goes away. For example, allow the Oracle to return four values. One extra value that indicates if the user of its return value will do the opposite, and the other value if it does the same thing. What then? Of course it's all about definitions. You didn't seriously think this was REALLY about calculating the halting status regardless of the definitions used, did you?

HAHAHAHAHAHAHA!!!

For example, if you ask me if you're going to go left or right and whatever answer I give you, you will do the opposite and you tell me as much, I know exactly how you will behave. So I know your behaviour. But if my only allowed response is DEFINED as LEFT or RIGHT only, it is inadequate to the task... EVEN WHEN I KNOW YOUR BEHAVIOUR.

So it isn't about calculating the halting status, but rather if I can calculate the halting status according to one specific definition of the Oracle. IOW, the halting problem proof doesn't prove if the Oracle can calculate the halting status or not as you incorrectly claim. It proves if the Oracle's definition is possible within the set of all programs. A different definition can calculate the halting status.