Searle's Chinese Room: Another Nail

[updated 2/28/2024 - See "Addendum"]
[updated 3/9/2024 - note on Searle's statement that "programs are not machines"]

I have discussed Searle's "Chinese Room Argument" twice before: here and here. It isn't necessary to review them. While both of them argue against Searle's conclusion, they aren't as complete as I think they could be. This is one more attempt to put another nail in the coffin, but the appeal of Searle's argument is so strong - even though it is manifestly wrong - that it may refuse to stay buried. The Addendum explains why.

Searle's paper, "Minds, Brains, and Programs", is here. He argues that computers will never be able to understand language the way humans do, for these reasons:
  1. Computers manipulate symbols.
  2. Symbol manipulation is insufficient for understanding the meaning behind the symbols being manipulated.
  3. Humans cannot communicate semantics via programming.
  4. Therefore, computers cannot understand symbols the way humans understand symbols.
In evaluating Searle's argument:
  1. Is certainly true. But Searle's argument ultimately fails because he only considers a subset of the kinds of symbol manipulation a computer (and a human brain) can do.
  2. Is partially true. This idea is also expressed as "syntax is insufficient for semantics." I remember, from over 50 years ago, when I started taking German in 10th grade. We quickly learned to say "good morning, how are you?" and to respond with "I'm fine. And you?" One morning, our teacher was standing outside the classroom door and asked each student as they entered, the German equivalent of "Good morning," followed by the student's name, "how are you?" Instead of answering from the dialog we had learned, I decided to ad lib, "Ich bin heiss." My teacher turned bright red from the neck up. Bless her heart, she took me aside and said, "No, Wilhelm. What you should have said was, 'Es ist mir heiss'. To me it is hot. What you said was that you are experiencing increased libido." I had used a simple symbol substitution: "Ich" for "I", "bin" for "am", and "heiss" for "hot", temperature-wise. But, clearly, I didn't understand what I was saying. Right syntax, wrong semantics. Nevertheless, I do now understand the difference. What Searle fails to establish is how meaningless symbols acquire meaning. So he handicaps the computer. The human has meaning and substitution rules; Searle only allows the computer substitution rules.
  3. Is completely false.
Because 2 is only partially true and 3 is false, his conclusion does not follow.

To understand why 2 is only partially true, we have to understand why 3 is false.
  1. A Turing-complete machine can simulate any other Turing machine. Two machines are Turing equivalent if each machine can simulate the other.
  2. The lambda calculus is Turing complete.
  3. A machine composed of NAND gates (a "computer" in the everyday sense) can be Turing complete.
    • A NAND gate is a "universal" logic gate (as is a NOR gate): every Boolean function can be built from NAND gates alone.
    • Memory can also be constructed from NAND gates.
    • The equivalence of a NAND-based machine and the lambda calculus is demonstrated by instantiating the lambda calculus on a computer. [1]
  4. From 2 and 3, every computer program can be written as an expression in the lambda calculus, and every computer program can be expressed as an arrangement of logic gates (see the sketch after this list). We could, if we so desired, build a custom physical device for every computer program, but doing so would be economically infeasible.
  5. Because every computer program has an equivalent arrangement of NAND gates [2], a Turing-complete machine can simulate that program.
  6. NAND gates are building blocks of behavior. So the syntax of every computer program represents behavior.
  7. Having established that computer programs communicate behavior, we can easily see why Searle's #2 is only partially true. Symbol substitution is one form of behavior. Semantics is another. Semantics is "this is that" behavior. This is the basic idea behind a dictionary. The brain associates visual, aural, temporal, and other sensory inputs, and this is how we acquire meaning. Associating the visual input of a "dog", the sound "dog", the printed word "dog", and the feel of a dog's fur is how we learn what "dog" means. We have massive amounts of data that our brain associates to build meaning. We handicap our machines, first, by not typically giving them the ability to have the same experiences we do. We handicap them, second, by not giving them the vast range of associations that we have. Nevertheless, there are machines that demonstrate that they understand colors, shapes, locations, and words. When told to describe a scene, they can. When requested to "take the red block on the table and place it in the blue bowl on the floor", they can.
Therefore, Searle's #3 is false. Computer programs communicate behavior. Syntax rules are one kind of behavior; the association of things, from which we get meaning, is another.
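
To make items 3 and 4 above concrete, here is a minimal sketch in Python - my illustration, not anything in Searle's paper. It builds the ordinary gates out of NAND alone, and then expresses the same NAND as a pure lambda term over Church-encoded booleans; that is the sense in which a program, an arrangement of gates, and a lambda-calculus expression are interchangeable descriptions of the same behavior.

    # 1. NAND as the universal gate: every other Boolean gate built from it alone.
    def nand(a, b):
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor(a, b):
        return and_(or_(a, b), nand(a, b))

    # 2. The same NAND as a pure lambda-calculus term over Church-encoded booleans:
    #    TRUE selects its first argument, FALSE selects its second.
    TRUE  = lambda x: lambda y: x
    FALSE = lambda x: lambda y: y
    NAND  = lambda p: lambda q: p(q(FALSE)(TRUE))(TRUE)  # if p then (if q then FALSE else TRUE) else TRUE

    def church_to_bool(b):
        return b(True)(False)

    # The gate arrangement and the lambda expression exhibit the same behavior.
    for a in (False, True):
        for b in (False, True):
            as_gate   = nand(a, b)
            as_lambda = church_to_bool(NAND(TRUE if a else FALSE)(TRUE if b else FALSE))
            assert as_gate == as_lambda

Nothing in the sketch depends on Python in particular; any Turing-complete substrate would serve, which is exactly the point.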

I was able to correct my behavior by establishing new associations: both temperature and libido with the German usage of "heiss". That associative behavior can be communicated to a machine. A machine, sensing a rise in temperature, could then inform an operator of its distress, "Es ist mir heiss!". Likely (at least, for now) lacking libido, it would not say, "Ich bin heiss."
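
As a toy illustration - mine, with invented phrases - the correction amounts to adding a "this is that" entry to the machine's table of associations, alongside the word-for-word substitution rules it already had:

    # Naive symbol substitution: the only behavior Searle grants the Room.
    substitution = {"I": "Ich", "am": "bin", "hot": "heiss"}

    def substitute(english_words):
        return " ".join(substitution[w] for w in english_words)

    # "This is that" association: the behavior the teacher's correction added.
    association = {
        ("hot", "temperature"): "Es ist mir heiss.",
        ("hot", "libido"):      "Ich bin heiss.",
    }

    print(substitute(["I", "am", "hot"]))        # Ich bin heiss  - right syntax, wrong semantics
    print(association[("hot", "temperature")])   # Es ist mir heiss.  - right semantics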

Having shown that Searle's argument is based on a complete misunderstanding of computation, I wish to address selected statements in his paper.

Instantiating a computer program is never by itself a sufficient condition of intentionality.

This is patently wrong, as can be seen simply by looking at the construction of machines (and brains). A NAND gate, by construction, has intentionality: two inputs are combined and one output is selected based on the characteristic of the logic gate. [3] The intentionality of the program is the combined intentionality of the logic gates that represent the program. It's no different with brains. A neuron has multiple inputs which are combined into one output. The intentionality of the brain is the combined intentionality of the neural net. Remember, the wiring is the program! They are equivalent.

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories.

Searle's thought experiment deliberately handicaps the computer. It gives the machine the behavior of substituting Chinese symbols for English symbols, but it does not give the computer the behavior of meaning, the "this is that" associations, the mental dictionary, that we build up over a lifetime.

In the Chinese case, I have everything that artificial intelligence can put into me by way of a program.

The volumes I have on artificial intelligence in my library should be enough to show that this is false.

My car and my adding machine, on the other hand, understand nothing: they are not in that line of business.

Rightfully so. Our machines have the behaviors they need to do the job for which they are designed. It is economically disadvantageous to include more than that. Searle makes the leap from "we don't give them this ability" to "we can't give them this ability."

Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese system knows only that "squiggle squiggle" is followed by "squoggle squoggle."

Actually, the English system doesn't know that "hamburgers" refers to hamburgers. The English system knows the substitution rule "squiggle squiggle" for "hamburger". The English system could be given additional associations with the word "hamburger", but if that is done for English, it can be done for Chinese. Just as it is done for humans.

... a part that engages in meaningless symbol manipulation ...

All computation, whether in the brain or in computers, is meaningless symbol manipulation. It doesn't matter if the symbols are marks on paper, voltages in a logic circuit, or atoms in a brain. You can replace the "alphabet" of the lambda calculus with atoms in space-time - it's all the same. Take apart a brain or a computer and you won't find meaning. Meaning is found in the behavior of the system, in particular in how the meaningless symbols are associated, which is mediated by the arrangement of the wiring, not in the individual parts.
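
A small, concrete illustration - mine, not Searle's: the very same bit pattern "means" a number, a different number, or a scrap of text depending entirely on how the surrounding program treats it.

    import struct

    # The same four bytes, interpreted three ways. The bytes carry no meaning of
    # their own; the meaning lives in how the rest of the system uses them.
    raw = bytes([0x42, 0x28, 0x00, 0x00])

    as_float = struct.unpack(">f", raw)[0]   # as a big-endian 32-bit float: 42.0
    as_int   = int.from_bytes(raw, "big")    # as a 32-bit unsigned integer: 1109917696
    as_text  = raw.decode("latin-1")         # as characters: 'B', '(', and two NULs

    print(as_float, as_int, repr(as_text))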

And the mental-nonmental distinction cannot just be in the eye of the beholder...

Unfortunately for Searle, that's the way the world works. See "The Inner Mind".

The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world.

It does no such thing. Interacting with the outside world is just more symbol manipulation. The sensors simply convert the properties of the outside world into the symbols (the lambda alphabet) used by the machine/brain. What it does do is give the machine a wider range of behavior so that it has a better chance of being seen as behavior we recognize as intelligent. The difference between a thermometer and a man isn't fundamentally the construction. It's the behavior! As Feynman said, "Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made." Searle admits this when he writes, "If we could build a robot whose behavior was indistinguishable from human behavior over a large range, we would attribute intentionality to it." But then Searle says that if we knew the robot's behavior was caused by a "formal program", "we would not attribute intentionality to it." This is because Searle doesn't understand the capabilities of "formal programs." But the arrangement of the wiring of our neural net is no different than the arrangement of the wiring in machines, and the wiring is equivalent to the "formal program."
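
A minimal sketch of that conversion, with a made-up sensor, a made-up calibration, and invented symbol names: the sensor's only job is to turn a physical quantity into one more symbol for the same downstream symbol manipulation.

    def sense_temperature(volts):
        """Pretend reading from a hypothetical thermistor, in volts."""
        celsius = volts * 25.0                               # invented calibration
        return "HOT" if celsius > 40.0 else "COMFORTABLE"    # world -> symbol

    def respond(symbol):
        """The downstream program sees only symbols, never the world itself."""
        return {"HOT": "Es ist mir heiss!", "COMFORTABLE": "Alles gut."}[symbol]

    print(respond(sense_temperature(1.9)))   # 47.5 C -> "HOT" -> "Es ist mir heiss!"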

By its own definition, it [AI] is about programs, and programs are not machines. [emphasis mine].

This is the fundamental mistake that Searle repeatedly makes. Every program is equivalent to a machine. Every program can be written in the lambda calculus; every program can be "written" as an arrangement of logic gates (including neurons); every program is an instance of a (subset of a) Turing machine. This is a consequence of Turing equivalence.

Addendum

The first objection to this takedown of Searle, from a certain alleged philosopher on Twitter [4], was that the idea that meaning is "this is that" behavior is too naive because it doesn't take a priori knowledge into account. The response to this is that the knowledge is contained in the arrangement of the logic elements in the system. Computers have hard-wired behavior so that they can bootstrap when the power is turned on.

The second objection was more interesting: that "this is that" behavior does not impart understanding the way humans understand things. With this objection, Searle's Chinese Room algorithm of symbol substitution becomes irrelevant. In this view, no algorithm can implement human mental behavior. Trying to understand how the philosopher arrived at this conclusion took a bit of detective work.

The first insight into the philosopher's mind was that consciousness is inferred from behavior. On this we both agree. Let us represent the inference process by the function F() (for inference from "folk" psychology, where "folk" is ambiguously defined). The philosopher then affirmed that F(B) did not equal F(B). That is, the inference to consciousness of behavior B when performed by a human (the first F(B)) was not the same inference when the exact same behavior was performed by a machine (the second F(B)). Assuming that the philosopher wasn't simply logically inconsistent, the difference in inference needed explanation. Here is where "folk" becomes important. It is clear to me (but perhaps not to the philosopher) that the philosopher was taking both the external behavior B_E and the internal behavior B_I as inputs to F(). Then, F(B_E, B_Ih) != F(B_E, B_Im) means that the internal behavior of the human, B_Ih, is not the same as the internal behavior of the machine, B_Im.

But we don't know what B_Ih is. It's a question mark. You can assert that it's not B_Im, but the philosopher wants more than assertion. So the philosopher brings in the idea of "multiple realizability," that is, that mental states can be mapped to different physical states. On this, the philosopher and the engineer agree. [5]

So now the philosopher asserts that F(B_E, ∞) != F(B_E, B_Im), the ∞ standing in for the indefinitely many internal realizations that multiple realizability allows for the human. The problem is that Turing equivalence says that all of those internal behaviors implement the same external behavior. If you have one, you have all of them. This is why the Turing Test is a blind test, so that only external behavior is considered.
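
A toy sketch of the blind-test point, with invented names: two different internal realizations of the same external behavior, and an inference function F that sees only the external behavior. It cannot help but return the same verdict for both.

    # Two internal realizations - a lookup table and arithmetic - of one external
    # behavior: squaring small integers.
    def internal_human_style(n):      # stands in for B_Ih: a memorized table
        return {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}[n]

    def internal_machine_style(n):    # stands in for B_Im: computation
        return n * n

    def external_behavior(impl):      # B_E: what an outside observer can record
        return [impl(n) for n in range(5)]

    def F(b_external):                # the inference, blind to internals
        return "can square" if b_external == [0, 1, 4, 9, 16] else "cannot square"

    # Identical external behavior forces identical inference.
    assert F(external_behavior(internal_human_style)) == F(external_behavior(internal_machine_style))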

So what Searle is really trying to argue, albeit unsuccessfully, is that human minds are not Turing equivalent. But the Church-Turing hypothesis is just that - a hypothesis. Furthermore, it's a hypothesis that cannot be proven or disproven simply by thinking about it. So Searle's proof is wrong. But whether the Church-Turing hypothesis is true remains open, and will remain open until a counter-example to it can be demonstrated. And that's the only way to prove that minds aren't Turing machines.



[1] It is sufficient to show the equivalence of every computer program to a program using the lambda calculus. But that doesn't make the connection to hardware explicit and it's important to stress the equivalence between computer programs and the hardware on which they run.
[2] Note that, for economic reasons, we translate computer programs into numbers in much the same way that Gödel translated arithmetic statements into numbers. The computer manufacturer establishes a set of numbers with a set of behaviors implemented by arrangements of logic gates. The compiler, which translates a computer program into the "machine language" of a particular machine, converts the symbols in the program to the equivalent numbers understood by the target machine.
[3] See "The Physical Ground of Logic".
[4] It's hard to know whether to take someone at face value on X. I give him the benefit of the doubt.
[5] "The Physical Ground of Logic"





