This is an archive taken from the Wayback Machine:

From (Ashwin Ram)
Subject: Class notes - CAN MACHINES THINK
Date: Sun, 6 Jun 1993 20:17:36 GMT



Basic question: Can computers be intelligent? Can computers think?

Note: This is a different question from

``Can machines think?''
``Are humans machines?''
``Can machines think like humans?''
``How would we know whether a computer is thinking?''

Keep in mind that it is not demeaning to humans to say that computers
can think or that humans are (physical/biological) machines. Humans
are certainly (very complex) physical/biological devices, but that
does not demean us in any way. If it should turn out that our
intelligence was indeed the result of a (very complex) program, that
would not demean us either.


Related question: What is intelligence? What is thinking?

1. A biochemical activity. (Does a neuron think?)
2. A neurobiological activity. (How many neurons are necessary?)
3. A non-computational "physical" activity. (What is it?)
4. A non-physical "mental" activity. (How could we ever know this?)
5. A behavioral activity. (Is I/O behavior sufficient?)
6. A "functional" activity. (The AI view: goals, plans, reasons, hypotheses,
explanations, memory. Self-reference? Judgement? Emotion?)


Can computers think?

Some common answers:

1. No (dualist/mystic): Computers lack "mental stuff". They don't have
intuitions, feelings, phenomenology. (Soul?)

2. No (neurophysiology critical): Even if their behavior is arbitrarily close,
biology is essential.

3. No (beyond our capabilities): Not impossible in principle, but too complex
in practice.
There are limits on our self-knowledge which will prevent us from creating
a thinking computer.

4. Yes (but not in our lifetimes): Too complex. Practical obstacles may be
insurmountable. But with better science and technology, maybe.

5. Yes (functionalist): Programs/computers will become smarter without a
clear limit. For all practical purposes, they will 'think' because they
will perform the FUNCTIONS of thinking. (The standard AI answer.)

6. Yes (extreme functionalist; back to mystic): Computers already think.
All matter exhibits mind in different aspects and degrees.

7. Others?


Clocks vs. Computers vs. People (Winograd and Flores)

                                          Clocks     People
1. Apparent autonomy:                     little     lot
2. Complexity of purpose (not design):    simple     several purposes
3. Structural plasticity:                 constant   changing
4. Unpredictability:                      little     lot

Computers are tending more and more towards "minds".


Alan Turing anticipated the following objections (discussed in his 1950 paper
``Computing Machinery and Intelligence''):

1. Theological objection: Thinking is a function of man's immortal soul.
2. Heads In The Sand objection: The consequences of machines thinking would
be too dreadful.
3. Mathematical objection (Lucas's argument): [See below.]
4. Argument from Consciousness: Machines will never be self-conscious.
5. Disability argument: It is impossible to make a machine do X, where X = be
friendly, fall in love, have a sense of humor, do something really new, ...
6. Lady Lovelace's objection: "[Babbage's] Analytical Engine has no
pretensions to _originate_ anything. It can do _whatever we know how to
order it_ to perform."
7. Informality of behavior: Since there are no rules of conduct running men's
lives, men aren't machines.
8. Extra-Sensory Perception argument: You could tell a machine from a man
with ESP, because a machine could at best guess randomly.


Some Serious Arguments Against AI:

Mathematical arguments
Biological arguments
Simulation arguments
Homunculus arguments
Creativity/predictability arguments
Artifact arguments
Symbol grounding arguments


Mathematical arguments (The Lucas Argument, as discussed by Hofstadter
in Goedel, Escher, Bach; also discussed by Dennett in The Abilities of Men
and Machines paper):

All machines are Turing Machines (universal computers), but humans are not;
they are more than Turing Machines.

Because: Suppose Jones (a particular human) was a realization of a Turing
Machine T. Then by Goedel's theorem, there is something A that he can't do
(namely, prove T's Goedel sentence). But since Jones can do A, he can't be a
realization of a Turing Machine.
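The limitative theorems that Lucas appeals to all rest on a diagonal
construction, which can be sketched concretely. A minimal illustration (the
list of "machines" below is invented for the example): given any fixed list
of total functions on the naturals, one can define a function that provably
differs from every function on the list.

```python
# Diagonalization, the construction behind Goedel's and Turing's
# limitative theorems, in miniature.
machines = [
    lambda n: 0,      # the zero function
    lambda n: n,      # the identity function
    lambda n: n * n,  # the squaring function
]

def diagonal(n):
    # Differ from machine n at input n: whatever machines[n] answers,
    # answer something else.
    return machines[n](n) + 1

# diagonal is not on the list: it disagrees with machines[i] at input i.
for i, m in enumerate(machines):
    assert diagonal(i) != m(i)
```

The open question in the Lucas debate is whether a human, confronted with the
machine alleged to realize him, could actually carry out this construction on
it, which is Hofstadter's second point below.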


1. Dennett: Provable limitations aren't limitations on what can be done
heuristically with high reliability. Analogy:

All computer programs are algorithms. (true)
There is no feasible algorithm for checkmate in chess. (true)
Therefore checkmate by computer is impossible. (false)

2. Hofstadter: There are things humans can't do too. Can we really
Goedelize? We know it can be done, but can Jones do it?

3. Hofstadter: Goedel's argument applies to lower levels of AI programs,
which are simple, formal, logical systems. But higher levels are models
of the mind, and can be informal just like the mind is. This is where the
intelligence lies. So human-like intelligence could emerge out of formal
lower levels.

Machines with multiple levels of knowledge, including "strange loops" in
which a level of description applies to itself, are fundamentally
different from what we normally think of as "machines" (which have no
such self-reference).
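Such a strange loop can be sketched in a few lines of Python (the
construction is a standard quine trick, not from the notes): a string that
describes how to rebuild itself, so that the description and the thing
described coincide.

```python
# A minimal "strange loop": a string template that, applied to itself,
# yields a program whose output is exactly that program's own source.
template = 'template = %r\nsource = template %% template'
source = template % template

# Executing the self-description reproduces the self-description:
# the description is a fixed point of its own interpretation.
namespace = {}
exec(source, namespace)
assert namespace["source"] == source
```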


Simulation arguments:

A simulation is not the real thing. You wouldn't expect to get wet in the
presence of a simulation of a hurricane. Similarly, although a simulation of
intelligence would tell you a lot, it would not really _be_ intelligence.


1. Hurricanes and intelligence are fundamentally different. A simulation of
intelligence would in fact display intelligence.

2. Dennett, in the Why You Can't Make A Computer That Feels Pain paper: Does
a simulation need to be indistinguishable from the real thing? The
hurricane simulation would give you good descriptions of hurricanes, and
predictions about a hurricane's behavior. So also in AI we seek a theory
of intelligence. We are not trying to prove that humans are computers
(any more than hurricanes are), but rather trying for a rigorous theory of
human psychology. The program _instantiates_ the theory.


Homunculus arguments (Hume's problem, as discussed by Dennett in the AI As
Philosophy And As Psychology paper):

We can't account for perception unless we assume it gives us an internal
image or model of the world. But what use is this image unless we have an
inner eye to perceive it? And then how do we explain _its_ capacity for
perception? (An infinite regress.)

Analogous homunculus problem in AI: The only psychology that could
explain intelligence must posit internal representations -- ideas,
sensations, impressions, maps, schemas, propositions, neural signals,
whatever (radical behaviorism excluded). But nothing is intrinsically
a representation of anything; it is a representation _for_ or _to_
someone. Any system of representations must have a user or
interpreter external to it. This interpreter must have psychological
abilities; it must understand the representations; it must have
beliefs and goals. But this is a homunculus. Therefore psychology
without homunculi is impossible.

Refutation: AI has solved this problem through reductionism. We
reduce intelligence to smaller and stupider "homunculi", until
ultimately they begin to look less like homunculi and more like
machines. In other words, the internal "modules" of intelligence need
not be full blown intelligences in themselves, so they can ultimately
be reduced to machines.
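The reductionist move can be sketched in code. A hypothetical illustration
(the adder decomposition is my example, not from the notes): a capability
that looks unified at the top, binary addition, is decomposed into
progressively dumber homunculi until every component is a mindless gate with
no intelligence left to explain.

```python
# Reductionism in miniature: the "smart" top-level ability (addition)
# is built from dumber homunculi (a full adder), which bottom out in
# mindless components (logic gates).
def xor_gate(a, b): return (a + b) % 2
def and_gate(a, b): return a & b
def or_gate(a, b):  return a | b

def full_adder(a, b, carry):
    # A slightly "smarter" homunculus assembled from dumb gates.
    s1 = xor_gate(a, b)
    total = xor_gate(s1, carry)
    carry_out = or_gate(and_gate(a, b), and_gate(s1, carry))
    return total, carry_out

def add(x, y):
    # The top-level "homunculus": binary addition, one bit at a time.
    result, carry, shift = 0, 0, 0
    while x or y or carry:
        bit, carry = full_adder(x & 1, y & 1, carry)
        result |= bit << shift
        shift += 1
        x >>= 1
        y >>= 1
    return result
```

Nothing in `xor_gate` "knows arithmetic"; the ability exists only in the
organization of the parts.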


Creativity and predictability arguments: Computers can only do what
they are programmed to do. They are not intelligent; they are merely
following the instructions of a program.


1. Evans: The same is true of animals and men.

Humans may well be the same way. If you could model humans down to the
level of physics (the program will probably need more memory than there
are atoms in the universe, but ignore that for a moment), they might be
predictable too since ultimately they are following the laws of physics.
But that apart, if we could understand the human "program", we might make
the same objection to human intelligence.

Computers are predictable only in theory (just like humans); once we
succeed in building an intelligent computer, it will be large and complex
enough that it will for all practical purposes be as unpredictable and as
non-instruction-following as humans are.

Note: it is not important to this argument who wrote the program, whether
it evolved through biology, or whether it was written by humans and/or
other computers.

2. Computers that learn from their experiences will not only be doing what
they are programmed to do; they will be able to go beyond their initial
programs. And since their experiences will not be deterministically known
in advance (and, for any given machine, you may not know its past
experiences), they will not be predictable.


Artifact arguments: Computers are artifacts; we created them, so they
can't be intelligent.

Refutation: We create babies too, yet we are willing to grant
intelligence to babies.


Searle's Chinese Room:


1. McCarthy: "Searle confuses the mental qualities of one computational
process, himself for example, with those of another process that the first
process might be interpreting, a process that understands Chinese, for
example."

It is the program that the computer is executing that understands Chinese.
Roughly, your neurons or your brain don't understand English, but rather
the program they are executing (your "mind") does. Similarly, Searle
doesn't understand Chinese, but the Chinese Room does by virtue of the
program it is executing.

McDermott: There are two understanders here, Searle and Searle simulating
the Chinese understander. It is the latter that understands Chinese.

2. Hayes: The basic flaw in Searle's argument is a widely accepted
misunderstanding about the nature of computers and computation: the idea
that a computer is a mechanical slave that obeys orders. This popular
metaphor suggests a major division between physical, causal hardware which
acts, and formal symbolic software, which gets read. This distinction runs
through much computing terminology, but one of the main conceptual
insights of computer science is that it is of little real scientific
importance. Computers running programs just aren't like the Chinese room.

Software is a series of patterns which, when placed in the proper places
inside the machine, cause it to become a causally different device.
Computer hardware is by itself an incomplete specification of a machine,
which is completed - i.e. caused to quickly reshape its electronic
functionality - by having electrical patterns moved within it. The
hardware and the patterns together become a mechanism which behaves in the
way specified by the program.
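Hayes's point can be sketched with a toy example (the two-instruction set
below is invented purely for illustration): the same fixed "hardware", an
interpreter, becomes a causally different device depending on which
"patterns", i.e. programs, are placed inside it.

```python
# The same "hardware" (a fixed interpreter) becomes a causally
# different device once different "patterns" (programs) are loaded.
def machine(program, x):
    for op, k in program:
        if op == "add":
            x += k
        elif op == "mul":
            x *= k
        else:
            raise ValueError(f"unknown instruction: {op}")
    return x

doubler     = [("mul", 2)]              # one pattern: the device doubles
incrementer = [("add", 1)]              # another: the device increments
affine      = [("mul", 3), ("add", 1)]  # together: x -> 3x + 1
```

With `doubler` loaded, `machine(doubler, 3)` behaves as a doubling device;
swap in `incrementer` and the very same interpreter behaves as a different
machine, not as a slave "obeying" a text.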

This is not at all like the relationship between a reader and the
instructions or rules he is following. Unless, that is, he has somehow
absorbed these instructions so completely that they have become part of
him, become one of his skills. The man in Searle's room who has done this
to his program now understands Chinese.

3. If the computer has _real_ experiences (it interacts with the world), if
it can connect the Chinese characters it is manipulating with these
experiences, if its symbols are grounded in the real world, then it _is_
understanding Chinese. Searle's Chinese Room needs sensory and motor
capabilities.


Harnad's Symbol Grounding Problem:


1. Powers: These results suggest three possible resolutions of the symbol
grounding problem: the symbol/non-symbol distinction is not meaningful;
neural networks can exhibit 'symbolic' behaviour and structure; and, a
sensory-motor environment can provide grounding.


Finally, a moral/ethical issue:

Let us suppose that it is possible to build intelligent computers.
Should we?

Would we leave critical decisions about (human) life, liberty, and
happiness to intelligent computers?
Would intelligent computers be used by some for ``evil'' purposes?
Would intelligent computers take over the world?

What, if any, is the responsibility of AI researchers?
[Some in AI have claimed that their loyalty is to intelligence,
not to humanity.]

What should WE as students of AI DO?


Copyright (c) Ashwin Ram, 1990-93
Assistant Professor, College of Computing
Georgia Institute of Technology, Atlanta, Georgia 30332-0280
E-mail: <>