The Halting Problem and Human Behavior

The "halting problem" considers whether or not it is possible to write a function that takes as input a function Pn and the inputs to Pn, Pin, and provides as output whether or not Pn will halt.

It is obvious that the function
(defun P1 ()
  "Hello, World!")
will return the string "Hello, World!" and halt.

It is also obvious that the function
(defun P2 ()
  (P2))
never halts. Or is it obvious? Depending on how the program is translated, it might eventually run out of stack space and crash. But, for ease of exposition, we'll ignore this kind of detail and put P2 in the "won't halt" column.

What about this function?
(defun P3 (n)
  (when (> n 1)
    (if (evenp n)
        (P3 (/ n 2))
        (P3 (+ (* 3 n) 1)))))
Whether or not this sequence converges to 1 for all N greater than 1 is an unsolved problem in mathematics (see The Collatz Conjecture). It's trivial to test the sequence for many values of N. We know that it converges to 1 for N up to 1,000,000,000 (actually higher, but one billion is a nice number). So part of the test for whether or not P3 halts might be:
(defun halt? (Pn Pin)
  (if (and (is-program Pn collatz-function) (<= 1 Pin 1000000000))
      t
      …))
But what about values greater than one billion? We can't just run a test case, because the program might not stop, and then halt? would never return.
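
To see the difficulty concretely, here is a minimal sketch of what "just run it and see" amounts to; the function name collatz-test, the step limit, and the :halted/:unknown results are my own illustrative assumptions, not part of the halt? sketch above:

;; Sketch only: iterate the Collatz map for at most LIMIT steps.
;; A :HALTED answer is definitive; an :UNKNOWN answer tells us nothing.
(defun collatz-test (n &optional (limit 1000000))
  (loop repeat limit
        while (> n 1)
        do (setf n (if (evenp n)
                       (/ n 2)
                       (+ (* 3 n) 1)))
        finally (return (if (> n 1) :unknown :halted))))

For example, (collatz-test 27) returns :HALTED, but whenever the limit runs out, :UNKNOWN leaves us exactly where we started.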

We can show with an easy proof that a general algorithm to determine whether an arbitrary function halts does not exist.

Suppose that halt? exists. Now create this function:
(defun snafu ()
  (if (halt? 'snafu nil)
      (snafu)))
If halt? says that snafu halts, then snafu will loop forever. If halt? says that snafu will loop, then snafu will halt. Either way, halt?'s answer is wrong, which shows that the function halt? can't exist when knowledge is symmetrical.
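
To spell out the two cases, here is a small illustrative sketch; the helper snafu-given and its claim parameter are mine, standing in for whatever answer halt? gives about snafu:

;; Whatever halt? claims about snafu, the behavior contradicts it.
(defun snafu-given (claim)
  (if claim
      (snafu-given claim)   ; halt? claimed "halts": loop forever
      nil))                 ; halt? claimed "loops": halt immediately

(snafu-given t) never returns, while (snafu-given nil) returns at once; neither claim can be correct.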

As discussed here, Douglas Hofstadter, in Gödel, Escher, Bach, wrote:

It is an inherent property of intelligence that it can jump out of the task which it is performing, and survey what it has done; it is always looking for, and often finding, patterns. (pg. 37)

Over 400 pages later, he repeats this idea:

This drive to jump out of the system is a pervasive one, and lies behind all progress in art, music, and other human endeavors. It also lies behind such trivial undertakings as the making of radio and television commercials. (pg. 478)

This behavior can be seen in the halting problem itself. After all, one is tempted to say, "Wait a minute. What if I take into account the environment in which halt? is called? halt? could say, 'When I'm analyzing a program and I see it trying to use me to change the outcome of my prediction, I'll return that the program will halt; but when I'm running as part of snafu, I'll return nil. That way, when snafu is running, it will halt, and so the analysis will agree with the execution.'" We have "jumped out of the system," made use of information not available to snafu, and solved the problem.
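
If we tried to write that insight down, halt? would need access to its environment somehow, say as an extra argument; here is a sketch, where the env parameter, its 'running value, and the snafu-specific dispatch are assumptions for illustration only:

;; Sketch only: this handles just the snafu case; a real analyzer
;; would have to cover every program, which is the whole problem.
(defun halt? (Pn Pin &optional (env 'analyzing))
  (declare (ignore Pn Pin))
  (if (eq env 'running)
      nil    ; asked from inside the running snafu: answer "loops",
             ; so snafu's test fails and snafu actually halts
      t))    ; asked by the outside analyzer: predict "halts",
             ; which now agrees with what snafu actually does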

Except that we haven't. The moment we formally extend the definition of halt? to include the environment, snafu can make use of it to thwart halt?:
(defun snafu-extended ()
  (if (halt? 'snafu-extended nil 'running)
      (snafu-extended)))
We can say that our brains view halt? and snafu as two systems that compete against each other: halt? tries to determine the behavior of snafu, and snafu tries to thwart halt?. If halt? can gain information about snafu that snafu does not have, then halt? can get the upper hand. But if snafu knows what halt? knows, snafu can get the upper hand. At what point do we say, "This is madness," and attempt to cooperate with each other?

I am reminded of the words of St. Paul:

Knowledge puffs up, but love builds up. — 1 Cor 8:1b


