SEE ELSEWHERE: The Chinese Room entry by Searle in the MIT Encyclopedia
of Cognitive Science
The Ur Article
Searle, J. (1980a), "Minds, brains, and programs", Behavioral and Brain Sciences 3:417-424.
AI" - the claim that "the appropriately programmed computer
really is a mind in the sense that computers given
the right programs can be literally said to understand and have other cognitive states" (p.
417) - Searle imagines himself locked in a room, receiving (as input)
Chinese writing (stories and questions); he processes the writing
by following a set of written instructions in English specifying
a Natural Language Understanding (NLU) program for Chinese modeled
& Abelson 1977's Script
Applier Mechanism (SAM); and he produces (as output) Chinese writing
(answers to the questions) "indistinguishable from . . . native
Chinese speakers" (p. 418). In the room, Searle is a
Turing (1950) test passing human computer (a la Turing 1937);
yet he doesn't understand a word of Chinese. So, neither would
an electronic computer running SAM or any other NLU program.
"The computer has nothing more than I have in the case where
I understand nothing" (p. 418): it attaches no meaning (semantics)
to the physical symbols (syntax) it processes and hence has no genuine
mental states. This result is said to generalize to "any
Turing machine simulation of human mental phenomena" (p. 417).
Searle then considers several would-be rejoinders to the experiment. The systems reply says
the thinker (in the scenario) isn't Searle, it's the whole Searle-in-the-room
system. Searle responds by imagining himself to "internalize
all the elements of the system" by memorizing the instructions,
etc.: "all the same," he intuits, he "understands
nothing of the Chinese" and "neither does the system"
(p. 419). The robot reply
would add "a set of causal relation[s]" between the symbols
and "the outside world" by putting the computer in a robot.
Searle replies that "the addition of such 'perceptual' and
'motor' capacities adds nothing in the way of understanding":
imagining himself in the room in the robot, computationally acting
as "the robot's homunculus," still, he intuits, "by
instantiating the program I have no intentional states of the relevant
type" (p. 420). The brain simulator reply envisages
a program that "simulates the actual sequence of neuron firings
in the brain of a native Chinese speaker" (p. 420). Searle
replies, "even getting close to the operation of the brain
is still not sufficient to produce understanding" (p. 421). The combination reply
imagines all of the above and Searle replies - in effect - three
times nil is nil. The
other minds reply insists
"if you are going to attribute cognition to other people"
on the basis of their behavior "you must
in principle also attribute it to computers": Searle dismisses
this as an epistemological worry beside his metaphysical point.
"The problem in this discussion," he says, "is not
about how I know that other people have cognitive states. But rather
what it is that I am attributing to them when I attribute cognitive
states" and "it couldn't be just computational processes
and their outputs because the computational processes and their
outputs can exist without the cognitive state" (pp. 421-422). To the
many mansions reply
- that we can imagine would-be AI-crafters succeeding by supplemental
(or wholly other) noncomputational means, if computational means
don't suffice - Searle retorts, this "trivializes the project
of strong AI by redefining it as whatever artificially produces
and explains cognition" (422). In conclusion, Searle
advances his own thought that the brain must produce intentionality
by some noncomputational means which are "as
likely to be as causally dependent on . . . specific biochemistry
. . . as lactation, photosynthesis, or any other biological phenomenon."
Searle, J. (1980b), "Intrinsic intentionality", Behavioral and Brain Sciences 3:450-456.
In this companion
piece Searle rebuts objections targeting the Ur (1980a)
article raised in the accompanying Open Peer Commentary.
This, he observes, requires him to "make fully explicit some
of the points that were implicit in the target article" and
"involve recurring themes in the commentaries" (p. 450).
As Searle explains it, "the point of the Chinese room example"
was to show that "instantiating a program could not be constitutive
of intentionality, because it would be possible for the agent to
instantiate the program and still not have the right
kind of intentionality"
(pp. 450-451: my emphasis).
Cases of "intrinsic
intentionality are cases
of actual mental states." Assertions that computers "decide"
or "represent" things, by contrast, are just "observer relative ascriptions of intentionality, which are ways that people have of talking
about entities figuring in our activity but lacking intrinsic intentionality"
(p. 451), like words, sentences, and thermostats. Much opposition
to the Chinese room argument rests "on the failure to appreciate
this distinction" (p.452). The difference between intrinsic
and observer-relative concerns awareness. "Who
does the interpreting?" (p. 454), that is the question; a question
dictating the methodological imperative "in these discussions
[to] always insist upon the first person point of view" (p.
451). (Cf., Hauser
forthcoming; Hauser 1993.)
To the charge (by Block, Dennett,
Pylyshyn, and Wilensky)
"that the argument is just based on intuitions of mine,"
Searle insists that intuitions "in the deprecatory sense have
nothing to do with the argument" (p. 451): in the room, it
is a plain "fact about me that I don't understand Chinese":
from "the first person point of view" there can be no
doubt. Curiously, Searle holds it to be likewise indubitable
"that my thermostat lacks beliefs": professed doubts (of
Marshall and McCarthy)
on this score he attributes to "confusing observer-relative
ascriptions of intentionality with ascriptions of intrinsic intentionality"
(p. 452). The curiosity is that the first person point of
view - here, the thermostat's - seems inaccessible (cf., Nagel 1974).
To Block's and Marshall's
suggestions that psychology might "assimilate intrinsic intentionality"
under a "more general explanatory apparatus" that "enables
us to place thermostats and people on a single continuum," Searle
insists "this would not alter the fact that under our present
concept of belief, people literally have beliefs and thermostats
don't" (p. 452). As for those - like Dennett
and Fodor - who "take me to task because I don't
explain how the brain works to produce intentionality," Searle
replies, "no one else does either, but that it produces mental phenomena and that the
internal operations of the brain are causally sufficient for the
phenomena is fairly evident from what we do know": we know,
for instance, that light "reflected from a tree in the form
of photons strikes my optical apparatus" which "sets up
a series of neuron firings" activating "neurons in the
visual cortex" which "causes a visual experience, and
the visual experience has intentionality" (p. 452: cf., Explanatory Gaps).
The objection "that Schank's program is just not good enough,
but newer and better programs will defeat my objection" (Block,
& Croucher, Dennett,
Schank) "misses the point" which "should
hold against any program at all, qua formal computer program";
and "even if the formal tokens in the program have some causal
connection to their alleged referents in the real world, as long
as the agent has no way of knowing that, it adds no intentionality
whatever to the formal tokens," and this applies (contra Fodor)
whatever "kind of causal linkage" is supposed (p. 454).
Haugeland's demon-assisted brain's neurons "still
have the right causal powers: they just need some help from the
demon" (p. 452); the "semantic activity" of which
Haugeland speaks "is still observer-relative and hence not
sufficient for intentionality" (p. 453).
Contrary to Rorty,
Searle protests his view "does not give the mental a `numinous
Cartesian glow,' it just implies that mental processes are as real
as any other biological processes" (p. 452). Hofstadter
is similarly mistaken: Searle advises him to "read Eccles
who correctly perceives my rejection of dualism" (p. 454):
"I argue against strong AI" Searle explains, "from
interactionist position" (my emphasis: cf., Searle 1992 ch. 1).
Searle thanks Danto, Libet, Maxwell,
Puccetti, and Natsoulas
for adding "supporting arguments and commentary to the main
thesis." He responds to Natsoulas' and Maxwell's "challenge
to provide some answers to questions about the relevance of the
discussion to the traditional ontological and mind-body issues"
as follows: "the brain operates causally both at the level
of the neurons and at the level of the mental states, in the same
sense that the tire operates causally both at the level of the particles
and at the level of its overall properties" (p. 455).
(Note that the properties of tires in question - "elasticity
and puncture resistance" - being dispositions
of matter arranged as in
"an inflated car tire" are materialistically unproblematic.
But mental states according to Searle are not
dispositions (as behaviorism
and functionalism maintain) but something else; not "a fluid"
(p. 451), he assures us, but something made of qualia and partaking
of ontological subjectivity (cf., Searle 1992),
but without the numinous Cartesian glow.)
Wilensky, Searle complains, "seems to think
that it is an objection that other sorts of mental states besides
intentional ones could have been made the subject of the argument,"
but, Searle says, "I quite agree. I could have made the
argument about pains, tickles, and anxiety," he continues,
but "I prefer to attack strong AI on what its proponents take
to be their strongest ground" (p. 453). (Note this response
implicitly concedes that the experiment is about consciousness
"rather than about semantics" (Searle 1999):
pains, tickles, and anxiety have no semantics, yet the experiment,
Searle allows, is none the worse for it.) The remaining commentators
(Smythe, Ringle, Menzel, and Walter),
in Searle's estimation, "missed the point or concentrated on
peripheral issues" (p. 455): the point, he avers, is that "there
is no reason to suppose that instantiating a formal program in the
way a computer does is any reason at
all for ascribing intentionality
to it" (p. 454).
Searle, J. (1984a), Minds,
Brains, and Science, Cambridge, MA: Harvard University Press.
Chapter two is titled
"Can Machines Think?" (n.b.).
After initially summarizing the view being opposed - "strong
AI" - as the view "that the mind is to the brain as the
program is to the computer" (Computationalism)
Searle proceeds to advertise the Chinese room as "a decisive
refutation" of claims such as Herbert Simon's claim that "we
already have machines that can literally think" (AI Proper:
1997a) ). The argument
that follows reprises the thought experiment and several of the
replies to objections from Searle 1980a
with a notable addition . . . a "derivation from axioms"
(cf., Searle 1989a) supposed to capture
the argument's "very simple logical structure so you can see
whether it is valid or invalid" (p. 38). The derivation
proceeds from the following premises (p. 39):
1. Brains cause minds.
2. Syntax is not sufficient for semantics.
3. Computer programs are
entirely defined by their formal, or syntactical, structure.
4. Minds have mental contents; specifically they have semantic contents.
to the following conclusions:
1. No computer program by
itself is sufficient to give a system a mind. Programs,
in short, are not minds and they are not by themselves sufficient
for having minds.
2. The way that
the brain functions to cause minds cannot be solely in virtue
of running a computer program.
3. Anything else
that caused minds would have to have causal powers at least
equivalent to those of the brain.
4. For any artefact
that we might build which had mental states equivalent to human
mental states, the implementation of a computer program would
not by itself be sufficient. Rather the artefact would
have to have powers equivalent to the powers of the human brain.
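The derivation's skeleton can be set out in first-order notation. The regimentation below is mine, not Searle's; note that the inference goes through only if axiom 2 is given the strong reading on which nothing purely syntactical ever has semantics - the reading on which, as noted below, Copeland's charge of invalidity turns.

% Schematic reconstruction (mine, not Searle's) of the 1984a derivation,
% assuming amsmath. Fx: x is entirely formally (syntactically) defined;
% Px: x is a computer program; Cx: x has semantic content; Mx: x is a mind.
\begin{align*}
\text{A2.}\quad & \forall x\,(Fx \rightarrow \neg Cx) && \text{strong reading of ``syntax is not sufficient for semantics''}\\
\text{A3.}\quad & \forall x\,(Px \rightarrow Fx) && \text{programs are entirely formally defined}\\
\text{A4.}\quad & \forall x\,(Mx \rightarrow Cx) && \text{minds have semantic contents}\\
\text{C1.}\quad & \therefore\;\forall x\,(Px \rightarrow \neg Mx) && \text{no program by itself constitutes a mind}
\end{align*}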
The stated "upshot" (p.
41) is that Searle's own "monist interactionist" (Searle
1980b, p. 454) hypothesis of "biological
naturalism" (Searle 1983, p. 230) - "namely, mental states
are biological phenomena" (p. 41) - is confirmed. (cf.,
Searle, J. (1990a), "Is the brain's mind a
computer program?", Scientific American 262(1):26-31.
Searle again rehearses
his 1980a thought experiment here as "a decisive
refutation" of the computational theories of mind, or "strong
AI," and restates the derivation from axioms with minor variations
1984a). He then proceeds
to address the Connectionist Reply and the Luminous Room Counterexample,
both posed by Paul and Patricia Churchland in a companion (1990)
article. The Connectionist Reply has it that Searle-in-the-room's
lack of understanding is due to the system's serial computational
architecture. The experiment, consequently, fails to show that symbol
processing by a more brainlike parallel or connectionist system
would similarly lack semantics and similarly fail to understand.
Searle replies that the insufficiency of connectionism is easily
shown by a "Chinese gym" variation on the original thought
experiment. Imagine that a gym full of "monolingual English-speaking
men" implements a connectionist architecture conferring the
same Chinese language processing abilities envisaged in the original
experiment. Still, "No one in the gym speaks a word of Chinese,
and there is no way for the gym as a whole to learn the meanings
of any Chinese words" (p. 28). The Luminous Room Counterexample
presents an absurd "refutation" of Maxwell's electromagnetic
wave theory of light: a man in a dark room causes electromagnetic waves
by waving around a bar magnet; he concludes from the failure of
the waving to illuminate the room that electromagnetic waves are neither constitutive of nor sufficient
for light. The Chinese room
example, according to the Churchlands, is completely analogous and
equally ineffectual as a "refutation" of the computational
theory of mind. Searle disputes the analogy. It breaks
down, he claims, "because syntax [purely formally] construed
has no physical powers and hence no physical, causal powers"
such that it might possibly be "giving off consciousness"
(p. 31) at an undetectably low level as with the light (cf., Searle 1999).
Searle, J. (1992), The Rediscovery of the Mind,
Cambridge, MA: MIT Press.
Though the Chinese
room argument is not itself prominently featured in it, this work
can be viewed as an attempt to shore up the foundations on which
that argument rests, and to nurture background assumptions (e.g.,
the Connection Principle) and supplementary contentions (e.g., the
observer relativity of syntax) which encourage Searle's wonted "intuition"
about the room.
Chapter 1 - countering criticism that the Chinese room argument depends on
dubious dualistic assumptions (Hauser 1993a,
chap. 6; forthcoming) or requires us to "regress to the
Cartesian vantage point" (Dennett 1987,
p. 336) - defends Searle's claim to "give a coherent account
of the facts about the mind without endorsing any of the discredited
Cartesian apparatus" (Searle 1992: 14). He then deploys
- by my count - at least five Cartesian devices in developing his
own "biological naturalist" account in the pages immediately
following. He affirms (1) the essential ontological
subjectivity of mental phenomena
("the actual ontology of mental states is a first-person ontology"
(p. 16)) and its correlative "Connection Principle" that
"[b]eliefs, desires, etc. . . . are always potentially conscious"
(p. 17); (2) a distinction "between something really having
a mind, such as a human being, and something behaving as if
it had a mind, such as a computer" (p. 16), a distinction
Descartes deploys to deny nonhuman animals any genuine mentality
which Searle redeploys (with similar intent) against computers;
(3) a methodological principle of privileged access according to
which "the first-person point of view is primary" (p.
20); (4) a distinction between primary ("intrinsic")
and secondary ("observer relative") properties; and, perhaps
most notably, (5) a Cartesian ego, i.e., "a `first person'
an `I,' that has these mental states" (p. 20). He even
dots the "`I'" (cf., Descartes 1642,
Meditation 2), and appropriately so, for Searle's "`I'"
is no more identifiable with body or brain than Descartes'.
Not every property had (e.g., grayness), nor every event undergone
(e.g., a hemorrhage), nor even every biological function performed
by a brain (e.g., cooling the blood) is mental:
but being had by a subject is supposed to constitute the mental
as mental. Nor would it avail - it would be circular, here
- to say, "thoughts are subjective
properties of brains":
it is precisely in order to explicate what it is for a property
to be subjective that Searle introduces "a `first person' an `I'
that has these mental states" in the first place.
Given his acceptance of all of this, it's hard then to see what
it is about Cartesian dualism - besides the name - that Searle thinks he has rejected.
Chapter 3 is notable
for acknowledging, finally, "the three hundred years
of discussion of the `other minds problem'" about which Searle
had hitherto - in his original (1980a)
presentation and subsequent discussions
of the other minds reply
- feigned amnesia. Searle's proposed "solution"
to this problem, however, is not new but, essentially, a reworking of the
well-worn argument from analogy. Neither is it improved.
The analogical argument
in its original form - wherein behavioral effects are held
to provide independent confirmation of the hypothesis suggested by physiological
resemblance (c.f., Mill 1889, p. 204-205n) - is generally thought
too weak to ward off the solipsism "implicit in . . . any theory
of knowledge which adopts the Cartesian egocentric approach as its
basic frame of reference" (Thornton 1996).
Yet, Searle's "solution" is to weaken the argument
further by discounting the evidentiary import of behavior.
In so doing Searle regresses in this connection not only to
Cartesianism, but beyond it, employing stronger as-ifness apparatus
to exclude computers from the ranks of thinking things than Descartes
employs to exclude infrahuman animals. (C.f., Harnad 1991.)
Chapter 7 elaborates
and defends what Searle calls the "Connection Principle":
"The notion of an unconscious
mental state implies accessibility to consciousness" (p. 152: c.f., Searle 1990f).
As the credulity of Harnad (1991)
and Bringsjord (1992) attests, such inviolable linkage of mentality
to consciousness facilitates acceptance of Searle's example:
if the argument is to be "about semantics" (Searle 1999,
p. 128) and thought in general - not just consciousness thereof/therein
- the possibility of unconscious understanding must be foreclosed.
Enter "the Connection
Principle" (p. 162).
Chapter 9 (p. 208)
reprises the Wordstar-on-the-wall argument of Searle 1990c
(p. 27) in pursuit of the supplemental stratagem (c.f., Searle 1999)
of maintaining that "syntax
is essentially an observer-relative notion. The multiple realizability
of computationally equivalent processes in different physical media
is not just a sign that the processes are abstract, but that they
are not intrinsic to the system at all. They depend on interpretation
from the outside" (p.
209: original italics). This buttresses the Chinese room argument
against the rejoinder that, while the argument (as characterized
by Searle) merely "reminds" us of the "conceptual
truth that we knew all along" (Searle 1988,
p. 214) that syntax alone doesn't suffice for semantics by definition
or in principle, whether implemented
syntax or computational
processes suffice for semantics
in fact (what is chiefly at issue here) is
an empirical question. The Chinese room experiment, the rejoinder
continues, is ill equipped to answer this empirical question due
to the dualistic methodological bias introduced by Searle's tender
of overriding epistemic privileges to the first-person. Furthermore,
Searle's would-be thought experimental evidence that computation
doesn't suffice for meaning (provided by Searle-in-the-room's imagined
lack of introspective awareness of the meaning) is controverted
by real experimental evidence (provided by the actual intelligent-seeming-behavior
of programmed computers) that it does (c.f., Hauser forthcoming).
However, if syntax and computation "exist only relative to
observers and interpreters," as Searle insists, arguably, empirical
claims of causal-computational
sufficiency are "nonstarters"
(Searle 1999, p. 176) and the possibility
that implemented syntax causally suffices for thought (or anything) is ruled out from the start.
Searle, J. (1994), "Searle, John R."
in A Companion to the Philosophy of Mind, ed. S. Guttenplan, Oxford: Basil Blackwell.
Searle here strenuously disavows his previously advertised claim to have "demonstrated
the falsity" of the claim "computers . . . literally have
thought processes" (Searle et al. 1984, p. 146) by the Chinese room argument. He here
styles it "a misstatement" to "suppose that [the
Chinese room] proves that 'computers cannot think'" (p. 547).
The derivation from axioms contra Computationalism is reprised
(from Searle 1984a, 1989a,
1990a). Characterizing "the
question of which systems are causally capable of producing consciousness
and intentionality" as "a factual issue" Searle relies
on renewed appeal to the need for "causal powers . . . at least
equal to those of human and animal brains" to implicate the
inadequacy of actual computers for "producing consciousness
and intentionality" (p. 547).
Searle, J. (1999), The
Mystery of Consciousness, New York: A New York Review Book.
This book is based
on several reviews of consciousness-related books by Searle that were
originally published in the New
York Review of Books (1995-1997).
Notably, it includes Daniel Dennett's reply to Searle's review of
Consciousness Explained (and Searle's
response) and David Chalmers' reply to Searle's review of The Conscious Mind (and
Searle's response). Though in defending the Chinese room argument
against Dennett, Searle bristles, "he misstates my position
as being about consciousness rather than about semantics" (p.
128), The Mystery of Consciousness, ironically, features the Chinese room
argument quite prominently; beginning, middle, and end.
Chapter One re-rehearses
the thought experiment and re-presents the intended argument as
"a simple three-step structure" as elsewhere (cf.,
Searle 1984a, 1989a, 1990a,
1994). Its validity is high-handedly presumed
("In order to refute the argument you would have to show that
one of those premises is false") and its premises touted as
secure ("that is not a likely prospect" (p. 13)),
as always, despite, as Searle himself notes, "over a hundred
published attacks" (p. 11). To these attacks, Dennett
complains, Searle "has never . . . responded in detail":
rather, Dennett notes, despite "dozens of devastating criticisms,"
Searle "has just presented the basic thought experiment over
and over again" (p. 116) unchanged. Unchanged, but not,
I observe, unsupplemented; as it is here. Searle continues,
"It now seems to me that the Chinese Room Argument, if anything,
concedes too much to Strong AI in that it concedes that the theory
is at least false," whereas, "I now think it is incoherent"
because syntax "is not intrinsic to the physics of the system
but is in the eye of the beholder" (p. 14). If I choose
to interpret them so, Searle explains, "Window open = 1, window
closed = 0" (p. 16). On yet another interpretation (to
cite an earlier formulation) "the wall behind my back is implementing
the Wordstar program, because there is some pattern of molecule
movements which is isomorphic to the formal structure of Wordstar"
and "if it is a big enough wall it is implementing any program"
(Searle 1990c, p. 27). This supplemental
argument "is deeper," Searle says, than the Chinese room
argument. The Chinese room argument "showed semantics
was not intrinsic to syntax"; this argument "shows that
syntax is not intrinsic to physics" (p. 17). (C.f., Searle 1992,
Since the Chinese
room argument is so "simple and decisive" that Searle
is "embarrassed to have to repeat it" (p. 11) - yet has
so many critics - it must be we critics misunderstand: so Searle steadfastly maintains. We
think the argument is about consciousness somehow, or that it's "trying to prove
that `machines can't think' or even `computers can't think'"
when, really, it's directed just at the "Strong AI" thesis
that "the implemented
program, by itself, is sufficient for having a mind" (p. 14). This oh-how-you-misunderstand-me
plaint is familiar (c.f., Searle 1984a, 1990a, 1994)
and fatuous. Searle takes it up again, in conclusion here,
where he explains,
I do not offer a proof that computers are not conscious.
Again, if by some miracle all Macintoshes suddenly became conscious,
I could not disprove the possibility. Rather I offered
a proof that computational operations by themselves, that is
formal symbol manipulations by themselves, are not sufficient
to guarantee the presence of consciousness.
The proof was that the symbol manipulations are defined in abstract
syntactical terms and syntax by itself has no mental content,
conscious or otherwise. Furthermore, the abstract symbols
have no causal powers to cause consciousness because they have
no causal powers at all. All the causal powers are in
the implementing medium. A particular medium in which
a program is implemented, my brain for example, might independently
have causal powers to cause consciousness. But the operation
of the program has to be defined totally independently of the
implementing medium since the definition of the program is purely
formal and thus allows implementation in any medium whatever.
Any system - from men sitting on high stools with green eyeshades,
to vacuum tubes, to silicon chips - that is rich enough and
stable enough to carry the program can be the implementing medium.
All this was shown by the Chinese Room Argument. (pp. 209-210)
Here it is all about
consciousness, yet Searle bristled that Dennett "misstates
my position as being about consciousness rather than about semantics"
(p. 128). Searle is right: I don't understand. But
if it all comes down to programs as abstract entities having no
causal powers as such - no power in
abstraction to cause consciousness,
intentionality, or anything - then the Chinese Room Argument is gratuitous.
"Strong AI," thus construed, is straw AI: only implemented
programs were ever candidate thinkers in the first place.
It takes no fancy "Gedankenexperiment" or "derivation from axioms"
to show this! Even the
Law of Universal Gravitation
is causally impotent in the
abstract - it is only as
instanced by the shoe and the
earth that the shoe is caused
to drop. Should we say, then, that the earth has the power
to make the shoe drop independently of gravitation? Of course not.
Neither does it follow from the causal powers of programs being
powers of their implementing media (say brains) that these
media (brains) have causal powers to cause consciousness "independently"
of computation. That brains "might," for all we
know, produce consciousness by (as yet unknown) noncomputational
means, I grant. Nothing in the Chinese room, however, makes
the would-be-empirical hypothesis that they do any more probable.
Here, the supplemental Wordstar-on-the-wall argument enters - though more
as a substitute than
a supplement. It does not so much take up where the Chinese
room argument leaves off as take over the whole burden: to show
that the brain's computational power isn't in the brain in the objective way that gravitational
power is in the earth; that computation, unlike gravitation, is "in the
eye of the beholder." Along these lines, in response
to Chalmers, Searle complains,
[Chalmers'] candidates for explaining consciousness, "functional organization"
and "information" are nonstarters, because as he uses
them, they have no causal explanatory power. To the extent you
make the function and the information specific, they exist only
relative to observers and interpreters. (p. 176)
Chalmers has since replied:
This claim is quite false. Searle has made it a number of times,
generally without any substantive supporting argument. I argue
in Chapter 9 of the book, and in more detail in my papers "A Computational Foundation for
the Study of Cognition"
and "Does a Rock Implement Every Finite-State
the relevant notions can be made perfectly precise with objective
criteria, and are therefore not at all observer-relative. If
a given system has a given functional organization, implements
a given computation, and therefore realizes certain information,
it does so as a matter of objective fact. Searle does not address
these arguments at all. ("On `Consciousness and the Philosophers'")
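For orientation, the implementation condition Chalmers defends in those papers can be sketched roughly as follows - my paraphrase, not his wording - with the work done by the requirement that the state-mapping support reliable, counterfactual-sustaining transitions:

% Rough schema (my paraphrase) of Chalmers' implementation condition.
% P: a physical system; M: a finite-state automaton with transition
% relation \delta_M; f: a mapping from physical state-types of P to
% computational states of M.
\[
P \text{ implements } M \iff \exists f\;\forall (s_i \to s_j) \in \delta_M:
\]
\[
P\text{'s being in any state } p \in f^{-1}(s_i) \text{ reliably causes } P \text{ to enter some state } p' \in f^{-1}(s_j).
\]

On this condition a Wordstar-interpretation gerrymandered onto a wall after the fact fails, since the mapping does not support the counterfactuals about what the wall would have done had it occupied the other states.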
Where Chalmers finds Searle's "response"
an "odd combination of mistakes, misrepresentations, and unargued
gut reactions.." Dennett complains similarly of the unresponsiveness
of Searle's "response" to him:
[Searle] trots out the Chinese Room yet one more time and has the audacity
to ask, "Now why does Dennett not face the actual argument
as I have stated it? Why does he not tell us which of
the three premises he rejects in the Chinese Room Argument?"
Well, because I have already done so, in great detail, in several
of the articles he has never deigned to answer. For instance
back in The Intentional
Stance, 1987) I explicitly
quoted his entire three premise argument and showed exactly
why all three of them are false, when given the interpretation
they need for the argument to go through! Why didn't I
repeat that 1987 article in my 1991 book? Because, unlike
Searle, I had gone on to other things. I did, however,
cite my 1987 article prominently in a footnote (p. 436), and
noted that Searle's only response to it had been simply to declare,
without argument, that the points offered there were irrelevant.
The pattern continues; now he both ignores that challenge and
goes on to misrepresent the further criticism of the Chinese
Room that I offered in the book under review . . . . (p. 117)
Elsewhere, Copeland 1993
makes a careful, cogent case that Searle's would-be thought-experimental
counterexemplification of Computationalism is invalid. Copeland,
like Chalmers, complains that the supplemental "Wordstar-on-the-wall
argument" is "simply mistaken" (pp. 136-7).
Searle has never - to my knowledge - replied to Copeland.
The pattern continues.
With so many weighty
unmet criticisms against it, the least that can be said is that
the Chinese Room Argument is hardly "simple and decisive."
Thus understood, the argument is simply invalid (c.f., Copeland 1993,
Hauser 1997a); and issues about
what things are "by
themselves . . . sufficient to guarantee" are not simple.
Whether it can further be fairly said of the Chinese Room Argument
that "just about anyone who knows anything about the field
has dismissed it long ago" as "full of well-concealed
fallacies," as Dennett says (p. 116), depends on how you count
experts. I, for one, have
dismissed it and do find
it full of fallacies (Hauser 1993,
1997a); though the argument still has defenders
(Bringsjord 1992, Harnad 1991).
It can, I think fairly, be said, that the Chinese room argument
is a potent conversation starter, and has been a fruitful discussion
piece. Discussion of the argument has raised, and is helping
to clarify, a number of broader issues concerning AI and computationalism.
It can also, I think fairly, be said that Searle's arguments pose
no clear and presently unmet challenge to claims of AI or Computationalism,
much less "proof" against them, as Searle claims ( p.
Abelson, Robert P. (1980), "Searle's argument
is just a set of Chinese symbols", Behavioral
and Brain Sciences 3:424-425.
about "intentionality" raise interesting worries about
the evidentiary disconnect between machines' representations
and the (putative) facts represented and concerning their lack of
appreciation of the "conditions for . . . falsification"
(p. 425) of their representations in particular. Nevertheless,
"Searle has not made convincing his case for the fundamental
essentiality of intentionality in understanding" (p. 425).
Hence, "we might well be humble and give the computer the benefit
of the doubt when and if it performs as well as we do" (p.
Block, Ned (1980), "What intuitions about
homunculi don't show", Behavioral and Brain
The crucial issue
with regard to the imagined homunculus in the room is "whether
the homunculus falls in the same natural kind . . . as our intentional
processes. If so, then the homunculus head does think in a reasonable
sense of the term" (p. 425); commonsense based intuitions not
withstanding. Furthermore, "the burden of proof lies with Searle
to show that the intuition that the cognitive homunculi head has
no intentionality (an intuition that I and many others do not share)
is not due to doctrinal hostility to the symbol-manipulation account
of intentionality" (p. 425).
Bridgeman, Bruce (1980), "Brains + programs
= minds", Behavioral and Brain
Searle thinks "we
somehow introspect an intentionality that cannot be assigned to
machines" (p. 427), but "human intelligence is not as
qualitatively different from machine states as it might seem to
an introspectionist" (p. 427). "Searle may well
be right that present programs (as in Schank and Abelson 1977) do
not instantiate intentionality according to his definition.
The issue is not whether present programs do this but whether it
is possible in principle to build machines that make plans and achieve
goals. Searle has given us no evidence that this is not possible"
(p. 427-8): "an adequately designed machine could include intentionality
as an emergent property even though individual parts (transistors,
neurons, or whatever) have none" (p. 427).
Danto, Arthur C. (1980), "The use and mention
of terms and the simulation of linguistic understanding", Behavioral and Brain Sciences 3:428.
Danto would "recast
Searle's thesis in logical terms", in terms of the U-properties
of words (use properties, e.g., meaning: distinguishable only by those able to
use the words) and the M-properties (mention properties, e.g., shape:
distinguishable even to those unable to use the words). This
recasting "must force [Searle's] opponents either to concede
machines do not understand" on the "evidence that in fact
the machine operates pretty much by pattern recognition" and
"Schank's machines, restricted to M-properties, cannot think
languages they simulate thinking in"; or else for them "to
abandon the essentially behaviorist theory of meaning for mental
predicates" they cling to, since "an M-specified simulation
can be given of any U-performance, however protracted and intricate"
and if we "ruthlessly define" U-terms in M-terms "then
we cannot any longer, as Schank and Abelson wish to do, explain
outward behavior with such concepts as understanding."
Dennett, Daniel (1980), "The milk of human
intentionality", Behavioral and Brain
is "sophistry" - "tricks with mirrors that give his
case a certain spurious plausibility": "Searle relies
almost entirely on ill-gotten gains: favorable intuitions generated
by misleadingly presented thought experiments." In particular
Searle's revisions to the experiment in response to the robot reply
and the systems reply taken together present
"alternatives so outlandishly unrealizable as to caution us
not to trust our gut reactions in any case." "Told
in detail the doubly modified story suggests either that there are
two people, one of whom understands Chinese, inhabiting one body,
or that one English-speaking person has, in effect, been engulfed
within another person, a person who understands Chinese" (cf.,
On Searle's view
"the `right' input-output relations are symptomatic but not
conclusive or criterial evidence of intentionality: the proof of
the pudding is in the presence of some (entirely unspecified) causal
properties that are internal to the operation of the brain" (p.
429). Since Searle "can't really view intentionality
as a marvelous mental fluid", his concern with the internal
properties of control systems appears to be a misconceived attempt
to "capture the interior point
of view of a conscious agent"
(p. 429). Searle can't see "how any mere computer, chopping
away at a formal program could harbor such a point of view"
because "he is looking too deep" into "the synapse filled jungles
of the brain" (p. 430). "It is not at that level
of description that a proper subject of consciousness will be found"
but rather at the systems level: the systems reply is "a step
in the right direction" and that is "away from [Searle's]
updated version of élan
vital" (p. 430).
Eccles, John C. (1980), "A dualist-interactionist
perspective", Behavioral and Brain
Though Searle asserts
that "the basis of his critical evaluation of AI is dependent
on" (p. 430) the proposition, "`Intentionality in human
beings (and animals) is a product of causal features of the brain"
this unsupported invocation of "a dogma of the psychoneural
identity theory" (431) does not figure crucially in his arguments
against strong AI. Thus Eccles finds, "Most of Searle's
criticisms are acceptable for dualist interactionism"; and
he agrees with Searle, "It is high time that Strong AI was
Fodor, J. A. (1980), "Searle on what
only brains can do", Behavioral and Brain
Fodor agrees, "Searle
is certainly right that instantiating the same program that the
brain does is not, in and of itself, a sufficient condition for
having those propositional attitudes characteristic of the organism
that has the brain" but finds "Searle's treatment of the
. . . quite unconvincing":
"All that Searle's example shows is that the kind of causal
linkage he imagines - one that is, in effect, mediated by a man
sitting in the head of a robot - is, unsurprisingly, not the right
kind. Though we "don't know how to say what the right
kind of causal linkages [to endow syntax with semantics] are, nevertheless,
"Searle gives no clue as to why . . .. the biochemistry is
important for intentionality and, prima facie, the idea that what
counts is how the organism is connected to the world seems far more
plausible." Furthermore, there is "empirical evidence
for believing that `manipulation of symbols' is involved in mental
processes"; evidence deriving "from the considerable success
of work in linguistics, psychology, and AI that has been grounded
in that assumption."
Haugeland, John (1980), "Programs, causal
powers, and intentionality", Behavioral
and Brain Sciences 3:432-433.
In the first place,
Searle's suggestion "that only objects (made of the stuff)
with `the right causal powers' can have intentionality" is
"incompatible with the main argument of his paper": whatever
causal powers are supposed to cause intentionality "a superfast
person - whom we might as well call
`Searle's demon'" - might take
powers presumably (as per
without understanding; showing these (biochemical or whatever) factors
to be insufficient for intentionality too!
Dismissing the demon
argument (and with it Searle's thought experiment) Haugeland
characterizes the central issue as "what differentiates original
from derivative intentionality" - the "intentionality
that a thing (system, state, or process) has `in its own right'"
- from intentionality "that is `borrowed from' or `conferred
by' something else." "What Searle objects to is
the thesis, held by many, that good-enough AI systems have (or will
eventually have) original intentionality." It is a plausible
claim that what distinguishes systems whose states have original
intentionality is that these states are "semantically active"
through being "embodied in a `system' that provides `normal
channels' for them to interact with the world" like thought,
and unlike text. "It is this plausible claim that underlies
the thesis that (sufficiently developed) AI systems could actually be
intelligent, and have original intentionality. For a case can surely
be made that their `representations' are semantically active (or,
at least, that they would be if the system were built into a robot)"
(cf., the robot reply). Haugeland nonetheless
sympathizes with Searle's denial that good-enough AI systems have
(or will eventually have) original intentionality. Not for
Searle's demon-based reason - that no matter how much semantically
appropriate interactivity a program had it wouldn't
count as semantics (since
the demon might have the same). Rather, for a much more "nitty-gritty
empirical" reason: Haugeland doubts whether programming can
capture or impart the appropriate type and degree of system-world
interactivity. Again, not because, if there were such a program, it still wouldn't suffice (as Searle argues), "but
because there's no such program": none is or ever will be good-enough.
Speculation aside, at least, "whether
there is such a program,
and if not, why not are . . . the important questions."
Hofstadter, Douglas R. (1980), "Reductionism
and Religion", Behavioral and Brain
is a "religious diatribe against AI masquerading a a serious
scientific argument." Like Hofstadter himself, Searle
"has deep difficulty in seeing how mind, soul, `I,' can come
out of brain, cells, atoms"; but while claiming to accept this
fact of nature, Searle will not accept the consequence that, since,
physical processes "are formal, that is, rule governed"
`(except "at the level of particles"), "`intentionality'
. . . is an outcome of formal processes." Searle's thought
experiment provides no real evidence to the contrary because "the
initial situation, which sounds plausible enough, is in fact highly
unrealistic", especially as concerns time scale. This
is fatal to the experiment since "any time some phenomenon
is looked at on a scale a million times different from its familiar
scale, it doesn't seem the same!" Thus, "what Searle
is doing" is "inviting you to identify with a nonhuman
which he lightly passes off as human, and by so doing he invites
you to participate in a great fallacy."
Libet, B. (1980), "Mental phenomena and behavior",
Behavioral and Brain Sciences 3:434.
thought experiment "shows, in a masterful and convincing manner,
that the behavior of the appropriately programmed computer could
transpire in the absence of a cognitive mental state" Libet
believes "it is also possible to establish the proposition
by means of an argument based on simple formal logic. In general,
where "systems A and B are known to be different, it is an
error in logic to assume that because systems A and B both have
property X, they must both also have property Y". From
this, Libet urges, it follows that "no behavior of a computer,
regardless of how successful it may be in simulating human behavior,
is ever by itself sufficient evidence of any mental state."
While he concurs with Searle's diagnosis of "why so many
people have believed that computer programs do impart a kind of
mental process or state ot the computer" - it's due to their
"residual behaviorism or operationalism" underwriting
"willingness to accept input-output patterns as sufficient
for postulating . . . mental states" - Libet here proposes
a cure more radical even than Searle. Libet deems Searle's
admission (in response to the combination
reply) - that we would attribute intentionality to "a robot whose behavior was
indistinguishable over a large range from human behavior . . . pending some reason not to" [my emphasis] - too concessive.
"On the basis of my argument," Libet asserts, one would
not have to know that the robot had a formal program (or whatever)
that accounts for its behavior, in order not to have to attribute
intentionality to it. All we need to know is that the robot's
internal control apparatus is not made in the same way and out of
the same stuff as is the human brain."
Lycan, William G. (1980), "The functionalist
reply (Ohio State)", Behavioral and Brain
Searle (among others), Lycan grants, effectively refutes behaviorism, the view that "if an organism
or device D passes the Turing test, in the sense of
systematically manifesting all the same outward behavioral dispositions
that a normal human does, then D has all the same sorts of contentful or
intentional states that humans do." But Searle's would-be counterexamples
have no such force as advertised against functionalism, "a more species-chauvinistic view"
according to which "D's manifesting all the same sorts of behavioral
dispositions we do does not alone suffice for D's having intentional states: it is necessary
in addition that D produce the behavior from stimuli in roughly the way that we do," i.e., that D's "inner procedures" and "inner
functional organization" should be "not unlike ours."
Lycan accepts Searle's judgment that neither Searle nor the room-Searle
system nor the room-Searle-robot system understands; but this is
not at all prejudicial to functionalism, he maintains, for the simple
reason that the imagined systems "are pretty obviously not
functionally isomorphic at the relevant level to human beings who
do understand Chinese." Lycan pitches the relevant level
fairly low and expresses "hopes for a sophisticated version
of the `brain simulator' (or the `combination machine') that Searle
illustrates with his plumbing example."
Agreeing with Searle's intuitions about the imagined systems (except the
"combination machine"), Lycan endorses a theoretical
point that Searle's subsequent presentations have come more and
more prominently to feature (cf., Searle 1984a,
1990a). Lycan puts
it thus: "A purely formally or syntactically characterized
element has no meaning or content in itself, obviously, and no amount
of mindless syntactic manipulation of it will endow it with any."
Lycan further agrees that this "shows that no computer has
or could have intentional states merely
in virtue of performing syntactic operations on formally characterized
elements. But that
does not suffice to prove that no computer can have intentional
states at all," as Searle seems to think. Our
brain states do not have the contents they do just in virtue of having their purely
formal properties either" (my emphases): "the [semantic]
content of a mental representation is not determined within its
owner's head (Putnam 1975a; Fodor
1980[b]): rather it is determined
in part by the objects in the environment that actually figure in
the representation's etiology and in part by social and contextual
factors of other sorts." (Searle 1983
tries mightily - and, in my opinion, fails miserably - to counter
such "semantic externalism".)
Given his considerable
agreement with Searle's intuitions and principles, perhaps unsurprisingly,
in the end, Lycan concludes less with a bang than a whimper that
"nothing Searle has said impugns the thesis that if a sophisticated
future computer not only replicated human functional organization
but harbored its inner representations as a result of the right
sort of causal history and had also been nurtured with a favorable
social setting, we might correctly ascribe intentional states to it."
McCarthy, John (1980), "Beliefs, machines,
and theories", Behavioral and Brain Sciences 3.
Searle's "dismissal of the idea that thermostats may be ascribed belief,"
McCarthy urges, "is based on a misunderstanding.
It is not a pantheistic notion that all machinery including telephones,
light switches, and calculators believe. Belief may usefully
be ascribed only to systems about which someone's knowledge can
best be expressed by ascribing beliefs that satisfy axioms [definitive
of belief] such as those in McCarthy
(1979). Thermostats are sometimes
such systems. Telling a child, `If you hold the candle under
the thermostat, you will fool it into thinking the room is too hot,
and it will turn off the furnace' makes proper use of the child's
repertoire of mental concepts."
In the case of the Chinese room, McCarthy maintains "that the
system understands Chinese" if "certain other conditions
are met": i.e., on the condition that someone's knowledge of
this system can best be expressed
by ascribing states that satisfy axioms definitive of understanding.
Marshall, John C. (1980), "Artificial intelligence
- the real thing?", Behavioral and Brain
Professing himself incredulous that anyone at present could actually believe computers "literally have cognitive states,"
Marshall points out that programming might endow systems with intelligence
without providing a theory or explanation of that intelligence.
Furthermore Searle is misguided in his attempts to belabor "the
everyday use of mental vocabulary." "Searle writes,
`The study starts with such facts as that humans have beliefs, while
thermostats, telephones, and adding machines don't'": Marshall
replies, "perhaps it does start there, but that is no reason
to suppose it must finish there": indeed the "groping"
pursuit of "levels of description" revealing "striking
resemblances between [seemingly] disparate phenomena" is the
way of all science; and "to see beyond appearances to a level
at which there are profound similarities between animals and artifacts" (my emphasis) is the
way the mechanistic scientific enterprise must
proceed in psychology as in biology more generally. It is "Searle,
not the [cognitive] theoretician, who doesn't really take the enterprise
seriously." His unseriousness is especially evident in
the cavalier way he deals with - or rather fails to deal with -
the other minds problem.
Maxwell, Grover (1980), "Intentionality: Hardware,
not software", Behavioral and Brain
that "Searle makes exactly the right central points and
supports them with exactly the right arguments" Maxwell explores
"some implications of his results for the overall mind-body
problem." Assuming that "intentional states are
genuinely mental in the what-is-it-like-to-be-a-bat? sense"
- i.e., accepting Searle's later-named thesis of "ontological subjectivity" - Maxwell finds the
argument weighs heavily against eliminativism and reveals functionalism
to be "just another variety" thereof. The argument's
"main thrust seems compatible with interactionism, with epiphenomenalism,
and with at least some versions of the identity thesis." Maxwell sketches his own version of the identity
thesis according to which mental events are part of the hardware
of `thinking machines'" and "such hardware must somehow
be got into any machine we build" before it would be thinking.
"Be all this as it may," he concludes, "Searle has
shown the total futility of the strong AI route to genuine artificial intelligence."
Menzel, E. W. Jr. (1980), "Is the pen mightier
than the computer", Behavioral and Brain
"by, convention if nothing else, in AI one must ordinarily
assume, until proven otherwise, that one's subject has no more mentality
than a rock: whereas in the area of natural intelligence one can
often get away with the opposite assumption" in other respects
"the problems of inferring mental capacities are very much
the same in the two areas." Here, "the Turing test (or
the many counterparts to the test which are the mainstay of comparative
psychology)" seeks "to devise a clear set of rules for
determining the status of subjects of any species." But
"Searle simply refuses to play such games" and consequently
"does not . . . provide us with any decision rules for the
remaining (and most interesting) undecided cases." His
"discussion of `the brain' and `certain brain processes' in
this connection is not only vague" but would "displace
and complicate the problems it purports to solve": "their
relevance is not made clear," and "the problem of deciding
where the brain leaves off" - or more generally where is the
locus of cognition - "is
not as easy as it sounds." "Einstein," Menzel
notes, "used to say `My pencil is more intelligent than I am'":
pencil-equipped brains acquire mental abilities in virtue of being
so (among other ways) equipped and "it is only if one confuses
present and past, and internal and external happenings with each
other, and considers them a single `thing,' that `thinking' or even
the causal power behind thought can be allocated to a single `place.'"
Minsky, Marvin (1980), "Decentralized minds",
Behavioral and Brain Sciences 3:439-440.
Minsky writes, "In the case of a mind so split into two parts that one merely executes
some causal housekeeping for the other, I should suppose that each
part - the Chinese rule computer and its host - would then have
its own separate phenomenologies - perhaps along different time
scales. No wonder the host can't `understand' Chinese very
fluently" (cf., Cole 1991a). Searle's argument, couched as it is in "traditional
ideas inadequate to this tremendously difficult enterprise"
could hardly be decisive, especially in the face of the fact that
"computationalism is the principal source of the new machines
and programs that have produced for us the first imitations, however
limited and shabby, of mindlike activity."
Natsoulas, Thomas (1980), "The primary source
of intentionality", Behavioral and Brain Sciences 3.
Natsoulas shares Searle's belief that "the level of description that computer
programs exemplify is not one adequate to the explanation of mind"
as well as his emphasis on the qualitative or phenomenal content of perception (in particular) - "the qualitative
being thereness of objects and scenes" - being something over
and above the informational content.
The remaining question for both concerns the explanatory gap between physiology and phenomenology
or "what is the `form of realization?' of our visual [and other]
experiences that Searle is claiming when he attributes them to us."
Puccetti, Roland (1980), "The chess room:
further demythologizing of strong AI", Behavioral
and Brain Sciences 3:441-442.
"On the grounds
he has staked out, which are considerable" Puccetti deems Searle
to be "completely victorious": Puccetti wants "to
lift the sights of his argument and train them on a still larger,
very tempting target." To this end he devises a Chinese-room-like
scenario involving having "an intelligent human from a chess-free
culture" follow a the instructions of a "chess playing"
program: since he "hasn't the foggiest idea of what he's doing,"
Puccetti concludes, "[s]uch operations, by themselves, cannot
, then, constitute understanding of the game, no matter how intelligently
played." Chess playing computers "do not have the intentionality
towards the chess moves they make that midget humans had in the
hoaxes of yesteryear. They simply know now what they do."
Pylyshyn, Zenon W. (1980), "The `causal power'
of machines", Behavioral and Brain
Since Searle insists that causal powers of the implementing medium under and
beneath the powers that make it an implementation are crucial
for intentionality, for Searle "the relation of equivalence
with respect to causal powers is a refinement of the relation of
equivalence with respect to function": this has the consequence
that "if more and more of the cells in your brain were replaced
by integrated circuit chips programmed in such a way as to keep
the input-output function
of each unit
identical to that of the unit being replaced, you would in all
likelihood just keep right on speaking exactly as you are doing
now except that you would eventually stop meaning anything by it" (cf., zombies). Furthermore the "metaphors and appeals
to intuition" Searle advances "in support of this rather
astonishing view" are opaque and unconvincing. "But
what is the right kind of stuff?" Pylyshyn asks. "Is
it cell assemblies, individual neurons, protoplasm, protein molecules,
atoms of carbon and hydrogen, elementary particles? Let Searle
name the level, and it can be simulated perfectly well in `the wrong
kind of stuff'." Indeed, "it's obvious from Searle's own
argument that the nature of the stuff cannot be what is relevant,
since the monolingual English speaker who has memorized the formal
rules is supposed to be an example of a system made of the right stuff and yet it allegedly still lacks the relevant
intentionality." "What is frequently neglected
in discussions of intentionality," Pylyshyn concludes, "is
that we cannot state with any degree of precision what it is that
entitles us to claim that people refer . . . and therefore
that arguments against the intentionality of computers," such
as Searle's, "typically reduce to `argument from ignorance'.."
Rachlin, Howard (1980), "The behaviorist reply
(Stony Brook)", Behavioral and Brain
finds it "easy to agree with the negative point Searle makes
about mind and AI" - "that the mind can never be a computer
program." But Searle's "positive point . . . that
the mind is the same thing as the brain . . . is just as clearly
false as the strong AI position that he criticizes."
The "combination robot example" -"essentially a behavioral
example" - illustrates Rachlin's point. "Searle
says `If the robot looks and behaves sufficiently like us, then
we would suppose, until proven otherwise, that
it must have mental states like ours'" (Rachlin's emphasis).
Rachlin insists, "can only come from one place - the robot's
subsequent behavior": Searle's willingness "to abandon
the assumption of intentionality (in a robot) as soon as he discovers
that a computer was running it after all" is "a mask for
contrary to anyone's expectations, all of the functional properties
of the human brain were discovered. Then the "human
robot" would be unmasked, and we might as well abandon
the assumption of intentionality for humans too.
we should not so abandon it. "It is only the behaviorist,
it seems, who is able to preserve terms such as thought, intentionality, and the like (as patterns
of behavior)." The "Donovan's brain reply (Hollywood)"
shows the utter absurdity of identifying mind with brain. Let Donovan's
brain be "placed inside a computer console with the familiar
input-output machinery," taking the place of the CPU and being
"connected to the machinery by a series of interface mechanisms."
"This `robot' meets Searle's criterion for a thinking machine
- indeed it is an ideal thinking machine from his point of view"
- but it would be no less "ridiculous to say" Donovan's
brain was thinking in processing the input-output than to say the
original computer was thinking in so doing. Indeed it would
probably be even more ridiculous since a "brain designed to
interact with a body, will surely do no better (and probably a lot
worse) at operating the interface equipment than a standard computer
mechanism designed for such equipment."
Ringle, Martin (1980), "Mysticism as a philosophy
of artificial intelligence", Behavioral
and Brain Sciences 3:444-445.
On the salient interpretation, "the term `causal powers' refers
to the capacities of protoplasmic neurons to produce phenomenal
states such as felt sensations, pains, and the like."
"But even if we accept Searle's account of intentionality"
as dependent on phenomenal consciousness, the assumption made by
his argument - that things of "inorganic physical composition"
like silicon chips, "are categorically incapable of causing
felt sensations" - "still seems to be untenable."
The mere fact that mental phenomena such as felt sensations have
been, historically speaking, confined to protoplasmic organisms
in no way demonstrates that such phenomena could not arise in
a nonprotoplasmic system. Such an assertion is on a par
with a claim (made in antiquity) that only organic creatures
such as birds or insects could fly.
Searle "never explains what sort of biological phenomenon it
is, nor does he ever give us a reason to believe there is a property
inherent in protoplasmic neural matter that could not, in principle,
be replicated in an alternative physical substrate," even
in silicon chips. "[O]ne can only conclude that the knowledge
of the necessary connection between intentionality and protoplasmic
embodiment is obtained through some sort of mystical revelation."
Rorty, Richard (1980), "Searle and the special
powers of the brain", Behavioral and Brain
claim "`that actual human mental phenomena might be dependent
on actual physical-chemical properties of actual human brains' .
. . seems just a device for insuring that the secret powers of the
brain will move further and further back out of sight every time
a new model of brain functioning is proposed. For Searle can
tell us that any such model is merely a discovery of formal patterns,
and the `mental content' has still escaped us." "If
Searle's present pre-Wittgensteinian attitude gains currency,"
Rorty fears, "the good work of Ryle and Putnam will be undone and
`the mental' will regain its numinous Cartesian glow"; but
this, he predicts, "will boomerang in favor of AI.
`Cognitive scientists' will insist that only lots more simulation
and money will shed light upon these deep `philosophical' mysteries."
Schank, Roger C. (1980), "Understanding Searle",
Behavioral and Brain Sciences 3:446-447.
is "certainly right" in denying that the Script Applier
Mechanism program (SAM: Schank & Abelson 1977) can understand and consequently
he is also right in denying that SAM "explains the human ability
to understand'": "Our programs are at this stage are partial
and incomplete. They cannot be said to be truly understanding.
Because of this they cannot be anything more than partial explanations
of human abilities." Still, Searle is "quite wrong"
in his assertion "that our programs will never be able to understand
or explain human abilities" since these programs "have
provided successful embodiments of theories that were later tested
on human subjects": "our notion of a script (Schank & Abelson 1977) is very much an explanation
of human abilities."
Sloman, Aaron and Monica Croucher (1980), "How
to turn an information processor into an understander," Behavioral and Brain Sciences 3:447-448.
Sloman and Croucher
combine elements of robot and systems replies. In their view a system having
a computational architecture or form
capable of intelligent sensorimotor
functioning in relation to things is "required before the familiar
mental processes can occur," e.g., mental processes such as
beliefs and desires about such things. "Searle's thought
experiment . . . does not involve operations linked into an appropriate
system in an appropriate way." Anticipating Searle's
reply - that "whatever the computational architecture . . .
he will always be able to repeat his thought experiment to show
that a purely formal symbol manipulating system with that structure
would not necessarily have motives, beliefs, or percepts" for
"he would execute all the programs himself (at least in principle)
without having any of the alleged desires, beliefs, perceptions,
emotions, or whatever" - Sloman & Croucher respond, "Searle
is assuming that he is a final authority on such questions as
what is going on in his mental activities" and "that it
is impossible for another mind to be based on his mental processes
without his knowing"; and this assumption is unwarranted.
Sloman and Croucher hypothesize "that if he really does
faithfully execute all the program, providing suitable time sharing
between parallel subsystems where necessary, then a collection of
mental processes will occur of whose nature he will be ignorant,
if all he thinks he is doing is manipulating meaningless symbols".
Smythe, William E. (1980), "Simulation games",
Behavioral and Brain Sciences 3:448-449.
Since intentional "states are, by definition, `directed at' objects and states of affairs
in the world" and "this relation is not part of the computational
account of mental states", this "casts considerable doubt
on whether any purely computational theory of intentionality is
possible." While Searle's thought experiment "may
not firmly establish that computational systems lack intentionality
. . . it at least undermines one powerful tacit motivation for supposing
that they have it" deriving from the fact that the "symbols
of most AI and cognitive simulation systems are rarely the kind
of meaningless tokens that Searle's simulation game requires."
"Rather, they are often externalized in forms that carry a
good deal of surplus meaning to the user, over and above their procedural
identity in the system itself, as pictorial and linguistic inscriptions,
for example." "An important virtue of Searle's
argument is that it specifies how to play the simulation game correctly"
such that "the procedural realization of the symbols"
is all that matters.
Walter, Donald O. (1980), "The thermostat
and the philosophy professor", Behavioral
and Brain Sciences 3: 449.
Searle "a program is formal" whereas "`intentionality'"
is "radically different" and "not definable in terms
of . . . form but of content. Searle merely, "asserts
this repeatedly, without making anything explicit of this vital
alternative": such explication is owed before Searle's argument
can be credited.
Wilensky, Robert (1980), "Computers, cognition
and philosophy", Behavioral and Brain
Sciences 3: 449-450.
In the Chinese room
scenario we are misled into identifying the two systems by the implementing
system being "so much more powerful than it need be.
That is, the homunculus is a full-fledged understander, operating
at a small percentage of its capacity to push around some symbols.
If we replace the man by a device that is capable of performing
only these operations, the temptation to view the systems as identical
greatly diminishes" (cf. Copeland 1993).
Further, Wilensky observes, "it seems to me that Searle's argument
has nothing to do with intentionality at all. What causes
difficulty in attributing intentional states to the machines is
the fact that most of these states have a subjective nature as well"; so, "Searle's
argument has nothing to do with intentionality per se, and sheds
no light on the nature of intentional states or on the kinds of
mechanisms capable of having them" (cf., Searle 1999).
Bringsjord, Selmer (1992), What
Robots Can and Can't Be,
Kluwer, pp. 184-207.
In Chapter 5, "Searle,"
Bringsjord proposes a variant of John Searle's Chinese room experiment
involving an imagined idiot-savant "Jonah" who "automatically,
swiftly, without conscious deliberation" can "reduce high-level
computer programs (in, say, PROLOG and LISP) to the super-austere
language that drives a Register machine (or Turing machine)"
and subsequently "can use his incredible powers of mental imagery
to visualize a Register machine, and to visualize this machine running
the program that results from his reduction" (p. 185). The
variant is designed to be systems-reply-proof and robot-reply-proof,
building in Searle's wonted changes - internalization of the program
(against the systems reply) and added sensorimotor capacities (to
counter the robot reply) - from the outset. Bringsjord then considers
three further objections - the Churchlands' (1990) connectionist
reply, David Cole's (1991a) multiple-personality reply, and Rapaport's
(1990) process reply - and offers rebuttals (cf. Hauser 1997b).
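To convey how "super-austere" the language driving a Register machine is, here is an illustrative sketch in Python (mine, not Bringsjord's; the instruction names and encoding are assumed for illustration). The whole language reduces to increment, decrement-or-branch, and halt - which is why Jonah, executing it, plausibly hasn't the foggiest idea what high-level program he is thereby running.

    # Illustrative sketch (not Bringsjord's): a minimal register machine
    # of the kind Jonah is imagined to visualize.
    def run(program, registers):
        """program: a list of instructions --
             ("inc", r, j)    : add 1 to register r, then go to instruction j
             ("decb", r, j, k): if register r > 0, subtract 1 and go to j;
                                otherwise go to k
             ("halt",)        : stop
           registers: dict mapping register numbers to natural numbers."""
        pc = 0
        while program[pc][0] != "halt":
            op = program[pc]
            if op[0] == "inc":
                _, r, j = op
                registers[r] = registers.get(r, 0) + 1
                pc = j
            else:  # "decb"
                _, r, j, k = op
                if registers.get(r, 0) > 0:
                    registers[r] -= 1
                    pc = j
                else:
                    pc = k
        return registers

    # Addition, "compiled" down to this austere level: drain register 1
    # into register 0 one unit at a time.
    add = [
        ("decb", 1, 1, 2),  # 0: if r1 > 0, decrement it and go to 1; else halt
        ("inc", 0, 0),      # 1: increment r0, loop back to 0
        ("halt",),          # 2: done
    ]
    print(run(add, {0: 2, 1: 3}))  # -> {0: 5, 1: 0}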
Chalmers, D. (1996), The
Conscious Mind: In Search of a Fundamental Theory, Oxford University Press, pp. 322-328.
The Chinese room argument is characterized by Chalmers as an "internal objection" (p. 314)
to "Strong AI". Where external objections - e.g., H. Dreyfus (1979), H. Dreyfus
& S. Dreyfus (1986) - allege the inability of computers to do
many of the things humans do, internal objections, like the Chinese room, argue that
even if computers did such things, doing them would not be thinking
(or even evidence of thinking). Though Searle's "original [1980a] version directs the argument against machine intentionality
rather than machine consciousness," Chalmers says, "All
the same, it is fairly clear that consciousness is at the root of
the matter" (p. 322). At the systems reply, Chalmers thinks, "the
argument reaches an impasse": an impasse broken, Chalmers maintains,
by his own "dancing qualia" proof (online) that "any system that has the same functional
organization at a fine enough grain will have qualitatively identical
conscious experiences" (p. 249: cf., Searle 1980a, the brain simulator reply).
Churchland, Paul, and Patricia Smith Churchland
(1990), "Could a Machine Think?", Scientific
American 262(1, January):32-39.
The Churchlands point up what they see as the "question-begging character of
Searle's axiom" that "Syntax
by itself is neither constitutive of nor sufficient for semantics"
(p. 27). Noting its similarity to the conclusion "Programs are neither constitutive of
nor sufficient for minds,"
the axiom, they complain, "is already carrying 90 percent of
the weight of this almost identical conclusion" which "is
why Searle's thought experiment is devoted to shoring up axiom 3
specifically" (p.34). The experiment's failure in this regard
is shown by imagining an analogous "refutation" of the
electromagnetic theory of light involving a man producing electromagnetic
waves by waving a bar magnet about in a dark room; observing the
failure of the magnet waving to illuminate the room; and concluding
that electromagnetic waves "are
neither constitutive of nor sufficient for light" (p.35). The intuited "semantic darkness"
in the Chinese Room no more disconfirms the computational theory
of mind than the observed darkness in the Luminous Room disconfirms
the electromagnetic theory of light. Still, the Churchlands, like
Searle, "reject the Turing test as a sufficient condition for
conscious intelligence" and agree with him that "it is
also very important how the input-output function is achieved; it
is important that the right sorts of things be going on inside the
artificial machine"; but they base their claims "on the
specific behavioral failures of classical [serial symbol manipulating]
machines and on the specific virtues of [parallel connectionist]
machines with a more brainlike architecture" (p.37). The brainlike
behavioral virtues of such machines - e.g., fault tolerance, processing
speed, and near instantaneous data retrieval (p.36) - suggest, contrary
to Searle's "common-sense intuitions," that "a neurally
grounded theory of meaning" (p.37) will confirm the claims
of future "nonbiological but massively parallel" machines
to true (semantics laden) artificial intelligence (p.37) - the Connectionist
Reply, I call it. Searle's (1990a)
"Chinese gym" version of the experiment - targeting connectionism
- seems "far less responsive or compelling than his first [version
of the experiment]" (p.37). First, "it is irrelevant that
no unit in his system understands Chinese since . . . no neuron
in my brain understands English." Then there is the heightened
implausibility of the scenario: a true brain simulation "will
require the entire human populations of over 10,000 earths".
Cole, David (1991a), "Artificial Intelligence and Personal Identity", Synthese 88:399-417.
"Searle's `Chinese Room' argument," Cole allows, "shows
that no computer will ever understand English or any other natural
language." Drawing on "considerations raised by
John Locke and his successors (Grice, Quinton, Parfit, Perry and
Lewis) in discussion of personal identity," Cole contends Searle's
result "is consistent with the computer's causing a new entity
to exist (a) that is not identical with the computer, but (b) that
exists solely in virtue of the machine's computational activity,
and (c) that does understand English." "This
line of reasoning," Cole continues, " reveals the abstractness
of the entity that understands, and so the irrelevance of the fact
that the hardware itself does not understand." "Thus,"
he concludes, "Searle's argument fails completely to show any
limitations on the present or potential capabilities of AI."
Copeland, B. J. (1993), Artificial
Intelligence: A Philosophical Introduction,
Blackwell, pp. 121-139 & pp. 225-230.
In Chapter 6, titled "The Strange Case of the Chinese Room,"
Copeland undertakes a "careful and cogent refutation"
(p. 126) of Searle's argument, pursuing the systems reply. This reply, Copeland
thinks, reveals the basic "logical flaw in Searle's argument"
(p. 126). The Chinese room argument invites us to infer the
absence of a property (understanding) in the whole
(system) from lack of understanding in one part
(the man); and this is invalid. The argument commits the fallacy
of composition. Yet Searle "believes he has shown the systems reply
to be entirely in error" (p. 126: my emphasis)!
Consequently, Copeland proposes to "take Searle's objections
one by one" to "show that none of them work" (p.126).
He identifies and carefully examines four lines of Searlean resistance
to the systems reply, debunking each (I think successfully).
The first Searlean line of resistance portrays the systems reply
as simply, intuitively, preposterous. As Searle has it, the
idea that somehow "the conjunction of that person and bits of paper" might understand
Chinese is ridiculous (Searle 1980a, p. 419). Copeland
agrees "it does sound silly to say the man-plus-rulebook
understands Chinese even while it is simultaneously true that
the man doesn't understand" (p. 126); but to understand
why it sounds silly is to
see that the apparent silliness does not embarrass the systems
reply. First, since the fundamental issue concerns computational
systems in general, the inclusion of a man in the room is an inessential detail "apt
to produce something akin to tunnel vision": "one
has to struggle not to regard the man in the room as the only
possible locus of Chinese-understanding" (p. 126).
Insofar as it depends upon this inessential detail in the thought
experimental setup, the "pull towards Searle's conclusion"
is "spurious" (p. 126). The second reason the
systems reply sounds silly in this particular case (of the Chinese room) is that "the wider
system Searle has described is itself profoundly silly.
No way a man could handwork a program capable of passing a Chinese
Turing test" (p. 126). Since the intuitive preposterousness
Searle alleges against the systems reply is so largely an artifact
of the "built-in absurdity of Searle's scenario" the systems reply is scarcely impugned.
"It isn't because the systems reply is at fault that
it sounds absurd to say that the system [Searle envisages] .
. . may understand Chinese" (p. 126); rather it's due to
the absurdity and inessential features of the system envisaged.
Second, Searle alleges that the systems reply "begs the question
by insisting without argument that the system understands Chinese"
(Searle 1980a, p. 419). Not so. In challenging
the validity of Searle's inference from the man's not understanding
to the system's not understanding, Copeland reminds us, he in
no way assumes
that the system understands. In fact in the
case of the system Searle actually envisages - modeled on Schank
and Abelsons' "Script Applier Mechanism" - Copeland
thinks we know this is false! He cites Schank's own confession that
"No program we have written can be said to truly understand"
(p.128) in this connection.
Third, Copeland considers the rejoinder Searle himself fronts.
The "swallow-up stratagem," as Weiss (1990) calls it: "let the
individual internalize all the elements of the system"
(Searle 1980a, p. 419). By this stratagem Searle would
scotch the systems reply, as Copeland puts it, "by retelling
the story so there is no `wider system'" (p. 128).
The trouble is that, thus revised, the argument would infer
absence of a property (understanding) in the part (the room-in-the-man) from its absence
in the whole (man). This too
is invalid. Where the original version commits a fallacy
of composition the revision substitutes a fallacy of division; to no avail, needless to say.
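Schematically (my rendering, not Copeland's own notation), writing U(x) for "x understands Chinese" and letting p be a proper part of a whole w, the two inference patterns are:

    % Schematic rendering - mine, not Copeland's notation.
    % U(x): "x understands Chinese"; p is a proper part of whole w.
    \[
    \text{Composition (original argument):}\;\; \frac{\neg U(p)}{\therefore\, \neg U(w)}
    \qquad\qquad
    \text{Division (internalized version):}\;\; \frac{\neg U(w)}{\therefore\, \neg U(p)}
    \]

Neither pattern is valid: no single neuron understands English, yet whole English speakers do; and an English speaker's subsystems (early visual processing, say) needn't themselves understand anything.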
Finally, Copeland considers Searle's insistence, against the systems
reply, that there is "no way the system can get from the syntax to the semantics"
(Searle 1984a, p. 34: my emphasis). Just as "I as the central processing
unit [in the Chinese room scenario] have no way of figuring
out what any of these symbols means," Searle explains,
"neither does the system" (Searle 1984a, p. 34). Here,
as Copeland points out, it is Searle who begs the question:
"The Chinese room argument is supposed to prove Searle's thesis that mere symbol manipulation
cannot produce understanding, yet Searle has just tried to use
this thesis to defend the Chinese room argument
against the systems reply" (p. 130)
Having dealt with Searle's supporting argument, Copeland proceeds to discuss Searle's thesis
that "there is no way the system can get from the syntax to
the semantics." In this connection, Copeland imagines
a souped-up, robot-ensconced descendant of SAM - Turbo Sam - trained
up until he "interacts with the world as adeptly as we do,
even writes poetry." Whether to count Turbo Sam as understanding (among other things) his own poetry
amounts to "a decision on whether or not to extend to an artefact
terms and categories that we currently apply only to each other
and our biological cousins"; and "if we are ever confronted
with a robot like Turbo Sam we ought to say it thinks" (p. 132: my emphasis).
"Given the purpose for which we apply the concept of a thinking
thing," Copeland thinks, "the contrary decision would
be impossible to justify" (p. 132). The real issue, as
Copeland sees it, is "whether a device that works by [symbol
manipulation] . . . can be made to behave as I have described Turbo
Sam as behaving" (p. 132): The Chinese room argument is
a failed attempt to settle this empirical question by a priori philosophical
argument. The concluding section
of Chapter 6 first debunks Searle's "biological objection"
as fatally dependent on the discredited Chinese room argument for
support of its crucial contention that it "is not possible
to endow a device with the same [thought causing] powers as the
human brain by programming it" (p. 134), then goes on to dispute
Searle's contention that "for any object there is some description
under which that object is a digital computer" (Searle 1990c,
p. 27). This "Wordstar-on-the-wall argument"
- which would trivialize claims of AI, if true - is, itself,
off the wall. Searle is "simply mistaken in his belief
that the `textbook definition of computation' implies that his wall
is implementing Wordstar" (pp. 136-7). Granting
"that the movements of molecules [in the wall] can be described
in such a way that they are `isomorphic' with a sequence of bit
manipulations carried out by a machine running Wordstar" (p.
137); still, this is not all there is to implementing Wordstar.
The right counterfactuals
must also hold (under the
same scheme of description); and they don't in the case of the wall. Consequently,
Searle fails to make out his claim that "every object has a
description under which it is a universal symbol system."
There is, Copeland asserts, "in fact every reason to believe
that the class of such objects is rather narrow; and it is an empirical
issue whether the brain is a member of this class" (p. 137).
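The counterfactual requirement can be put schematically; this gloss is mine, in the spirit of Chalmers' "Does a Rock Implement Every Finite-State Automaton?" (listed in the references below), not Copeland's own formulation:

    % My gloss, not Copeland's formulation. P: a physical system;
    % A: an automaton with state set Q and transition function \delta;
    % f: a mapping from physical state-types of P to states in Q.
    \[
    P \text{ implements } A \iff \exists f\, \forall q, q' \in Q:\;
    \delta(q) = q' \;\Rightarrow\; \text{(were } P \text{ in any state } s
    \text{ with } f(s) = q, \text{ it would pass into some } s'
    \text{ with } f(s') = q')
    \]

The wall's molecular motions may happen to trace one Wordstar-isomorphic sequence of states, but the subjunctive "would" fails for all the transitions the wall never actually takes.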
Chapter 10, titled
"Parallel Distributed Processing" (PDP), takes up the
cudgel against Searle's (1990a) Chinese gym variant of his argument, a
variant targeting PDP and Connectionism. Here, amidst much
nicely nuanced discussion, Copeland makes a starkly obvious
central point: "the new [Chinese gym] version of the Chinese
room commits exactly the same fallacy [of composition] as the old
Dennett, Daniel (1987), "Fast Thinking,"
in The Intentional Stance, Cambridge, MA: MIT Press.
"Having a program
- any program by itself - is not sufficient for semantics"
is Searle's stated conclusion. Dennett observes that it is
"obvious (and irrelevant)" that "no computer program
`by itself'" - as a "mere sequence of symbols" or
even "lying unimplemented on the shelf" - "could
`produce intentionality'" (p. 324-325). Only the claim
"that no concretely implemented running computer program could
`produce intentionality'" is "a challenge to AI"
(p. 325); so this is how the argument's conclusion must be construed
to be of interest. When the argument's premises are reconstrued
as needed to support this conclusion, the
premises are, at best, dubious.
"Programs are purely formal
(i.e., syntactic) syntactical" - apropos program runs
- is false.
of `embodiment' are included in the specification of a program,
and are considered essential to it, then the program is not
a purely formal object at all . . . and without some
details of embodiment being fixed - by the internal semantics
of the machine language in which the program is ultimately written
- a program is not even a syntactic object, but just a pattern
of marks inert as wallpaper. (p. 336-337)
"Syntax is neither equivalent
to nor sufficient for semantics" - apropos program runs -
is a dubious prediction. More likely, syntax
- the `right program' on a suitably fast machine - is
sufficient for derived intentionality, and that is all the
intentionality there is. (p. 336)
mental contents" - even this - is a dubious proposition if
content is "viewed, as Searle can now be seen to require, as
a property to which the subject has conscious privileged access"
(p. 337). Searle is
required to so view it since
his case "depends on the `first-person point of view' of the
fellow in the room"; so, "that is the crux for Searle:
consciousness, not `semantics'" (p. 335).
Steven (1991), "Other bodies, other minds: a machine incarnation
of an old philosophical problem", Minds and Machines
Searle's Chinese Room Experiment as a reason for preferring his
proposed Total Turing Test (TTT) to Turing's original "pen
pal" test (TT). By "calling for both linguistic
robotic capacity," Harnad contends, TTT is rendered "immune
to Searle's Chinese Room Argument" (p. 49) because "mere
sensory transduction can foil Searle's argument": Searle in
the room must "perform all the internal activities of the machine
. . . without displaying the critical mental function in question"
yet, if "he is being the device's sensors . . . then he would
in fact be seeing!" (p. 50). Though thwarted by transduction,
Harnad thinks that as an "argument against the TT and symbol
manipulation" the Chinese room has been "underestimated"
(p. 49). The Chinese room, in Harnad's estimation, adequately
shows "that symbol manipulation is not all there is to mental
functions and that the linguistic version of the Turing Test just
isn't strong enough, because linguistic communication could in principle
(though perhaps not in practice) be no more than mindless symbol
manipulation" (p. 50). "AI's
favored `systems reply'" is a "hand-waving" resort
to "sci-fi fantasies," and the Churchland's (1990)
"luminous room" rests on a false analogy.
Harnad sees that
the Chinese Room Experiment is not, in the first place, about intentionality
(as advertised), but about consciousness therein/thereof: "if there weren't
something it was like [i.e., a conscious or subjective experience]
to be in a state that is about something" or having intentionality
"then the difference between "real" and "as-if"
intentionality would vanish completely" (p. 53 n. 3), vitiating
the experiment. Acknowledging this more forthrightly than
Searle (1999), Harnad faces the Other-Minds Problem
arising from such close linkage of consciousness to true ("intrinsic")
mentality as Harnad insists on, in agreement with Searle (cf. Searle
1992). Your "own private experience"
being the sole test of whether your
mentality is intrinsic,
on this view, it seems there "is in fact, no evidence for me
that anyone else but me has a mind" (p. 45). Remarkably,
Harnad accepts this: no behavioral (or otherwise public test) provides
evidence of (genuine intrinsic) mentation "at all, at least
no scientific evidence" (p. 46). Regrettably, he never
explains how to reconcile this
contention (that no public
test provides any evidence of true [i.e., private] mentation)
with his contention that TTT (a public test itself) is a better
empirical test than TT. (Hauser
1993b replies to this article.)
Hauser, Larry (1993a), Searle's Chinese Box: The Chinese Room Argument
and Artificial Intelligence, doctoral dissertation, Michigan State University.
The dissertation contests Searle's Chinese room argument
(chap. 2). Furthermore, the supporting Chinese room thought experiment is not robust (similar scenarios
yield conflicting intuitions), fails to generalize to other mental
states (besides understanding) as claimed, and depends for its credibility
on a dubious tender of epistemic privilege - the privilege to override
all external or "third person" evidence - to first person
(dis)avowals of mental properties like understanding (chap. 3).
Searle's Chinese-room-supporting (1980b) contention that everyday predications of mental
terms to computers are discountable as equivocal (figurative) "as-if"
attributions is unwarranted: standard ambiguity tests evidence the
univocality of such attributions (chap. 4). Searle's further
would-be-supporting differentiation of intrinsic intentionality (ours) from as-if intentionality (theirs)
is untenable. It depends either on dubious doctrines of objective intrinsicality according to which meaning
is literally in the head (chap. 5); or else it depends on even more
dubious doctrines of subjective
intrinsicality according to which meaning is "in" consciousness (chap. 6).
Hauser, Larry (1993b), "Reaping the Whirlwind: Reply to Harnad's Other Bodies,
Other Minds", Minds and Machines 3, pp. 219-238.
Harnad's "robotic upgrade" of Turing's Test (TT), from a test of
linguistic capacity alone to a Total Turing Test (TTT) of linguistic
and sensorimotor capacity - to protect against the Chinese room experiment
- conflicts with his claim that no behavioral test provides even
probable warrant for mental attributions. The evidentiary
impotence of behavior - on Harnad's view - is due to the ineliminable
consciousness of thought (cf. Searle's 1990f
"Connection Principle") and there being "no evidence"
(Harnad 1991, p. 45) of consciousness
besides "private experience" (Harnad 1991,
p. 52). I agree with Harnad that distinguishing real from
"as if" thought on the basis of (presence or lack of)
consciousness - thus rejecting Turing or other behavioral testing
as sufficient warrant for mental attribution - has the skeptical consequence Harnad accepts:
"there is in fact no
evidence for me that anyone
else but me has a mind" (Harnad 1991,
p. 45). I disagree with his acceptance of it! It would be better to give
up the neo-Cartesian "faith" ( Harnad 1991,
p. 52) in private conscious experience underlying Harnad's allegiance
to Searle's controversial Chinese Room Experiment than to give up
all claim to know others think. It would be better to allow that
(passing) Turing's Test evidences - even strongly evidences - thought.
While Harnad's allegiance
to the Connection Principle causes him to overestimate the force
of Searle's argument against computationalism and against Turing's
test (TT), he is further mistaken in thinking his "robotic
upgrade" (TTT) confers any special immunity to Searle's thought
experiment. Visual transduction can be unconscious, as in
"blindsight," which will be "as-if seeing" by
Harnad's and Searle's lights. So, by these lights Searle can transduce visual input without actually
(i.e., consciously) seeing. "If the critical mental function
in question is not required to be conscious (as I advocate),
then TT and TTT are both immune to Searle's example. If the critical
mental function in question is required to be conscious (as Harnad advocates),
then both TT and TTT are vulnerable to Searle's example, perhaps".
Hauser, Larry (1997a), "Searle's Chinese Box:
Debunking the Chinese Room Argument," Minds
and Machines 7: 199-226.
Searle's presentation suborns a fallacy: Strong AI or Weak
AI; not Strong AI (by the Chinese room experiment); so, Weak AI.
This equivocates on "Strong AI" between "thought
is essentially computation" (Computationalism), and
"computers actually (or someday will) think" (AI Proper).
The experiment targets Computationalism . . . but Weak AI (they
simulate) is logically opposed to AI Proper (they think), not to
Computationalism. Taken as targeting AI Proper, the Chinese
room is a false dichotomy wrapped in an equivocation. Searle's
invocation of "causal powers (at least) equivalent to those
of brains" in this connection (against AI Proper) is similarly
equivocal. Furthermore, Searle's advertised "derivation
from axioms" targeting Computationalism is, itself, unsound.
Simply construed, it's simply invalid; unsimply construed (as
invoking modalities and second-order quantification) - since program
runs (what's at issue) are not
purely syntactic (as Searle's first "axiom" asserts they
are) - it makes a false assumption.
Hauser, Larry (forthcoming), "Nixin' Goes to China."
Computationalism holds "the essence of the mental is the operation of a physical symbol system" (Newell 1979,
as cited by Searle
1980a, p. 421: my emphasis).
Computationalism identifies minds with processes
or (perhaps even more concretely)
with implementations, not with programs "by themselves" (Searle 1999,
p. 209). But substituting "processes" or "implementations"
for "programs" in "programs are formal (syntactic)"
falsifies the premise: processes or implementations are not
purely syntactic but incorporate elements of dynamism (at least)
besides. In turn, substituting "running syntax"
or "implemented syntax" for "syntax" in "syntax
is not sufficient for semantics" makes it impossible to maintain
the conceit that this is "a conceptual truth that we knew all
along" (Searle 1988, p. 214). The
resulting premise is clearly an empirical hypothesis in need of
empirical support: support the Chinese room thought experiment is
inadequate to provide. The point of experiments being to adjudicate
between competing hypotheses, to tender overriding epistemic privileges
to the first person (as Searle does) fatally prejudices the experiment.
Further, contrary to Searle's failed thought experiment, there is ample evidence
- e.g., intelligent findings and decisions of actual computers running
existing programs - to suggest
that processing does in fact suffice for intentionality. Searle's
would-be distinction between genuine attributions of "intrinsic
intentionality" (to us) and figurative attributions of "as-if"
intentionality (to them) is too facile to impugn this evidence.
Pinker, S. (1997), How
the Mind Works, W. W. Norton & Co.,
New York, pp. 93-95.
Searle appeals to his Chinese room example, as Pinker tells it, to argue
this: "Intentionality, consciousness, and other mental phenomena
are caused not by information processing . . . but by the 'actual
physical-chemical properties of actual human brains" (p. 94).
Pinker replies that "brain tumors, the brains of mice, and
neural tissues kept alive in a dish don't understand, but their
physical chemical properties are the same as the ones of our brains."
They don't understand because "these hunks of neural tissue
are not arranged into patterns of connectivity
that carry out the right information processing" (p. 95).
Pinker endorses Paul & Patricia Churchland's (1990) electromagnetic room thought experiment as a refutation
of Searle's (1990a)
Chinese Gym variation on the Chinese room (a variant aiming to show
that parallel processing doesn't suffice for semantics).
Rapaport, William J. (1990), "Computer Processes and Virtual Persons:
Comments on Cole's `Artificial Intelligence and Personal Identity'",
Technical Report 90-13 (Buffalo: SUNY Buffalo Department of Computer
Science). Rapaport seeks "to clarify and extend the issues" raised by Cole 1991a, arguing "that, in
Searle's celebrated Chinese-Room Argument, Searle-in-the-room does
understand Chinese, in spite of his claims to the contrary. He does
this in the sense that he is executing a computer `process' that
can be said to understand Chinese" (online abstract).
Anderson, David. 1987.
Is the Chinese room the real thing? Philosophy 62:389-393.
Block, Ned. 1978. Troubles with Functionalism. In C. W. Savage, ed., Perception
and Cognition: Issues in the Foundations of Psychology, Minnesota
Studies in the Philosophy of Science, Vol. 9, 261-325. Minneapolis:
University of Minnesota Press.
Boden, Margaret A.
1988. Escaping from the Chinese room. In The philosophy of artificial intelligence, ed. Margaret Boden, 89-104.
New York: Oxford University Press. Originally appeared as Chapter
8 of Boden, Computer
models of the mind.
Cambridge University Press: Cambridge (1988).
Cam, Phillip. 1990.
Searle on strong AI. Australasian
Journal of Philosophy.
Carleton, Lawrence R. 1984. Programs, language understanding, and Searle. Synthese 59:219-230.
Chalmers, David. Absent Qualia, Fading Qualia, Dancing Qualia.
---. A Computational Foundation for the Study of Cognition.
---. Does a Rock Implement Every Finite-State Automaton?
Cole, David. 1984.
Thought and thought experiments. Philosophical Studies 45:431-444.
---. 1991b. Artificial minds: Cam on Searle. Australasian Journal of Philosophy 69(3):329-333.
Dennett, Daniel. 1991. Consciousness Explained. Boston: Little, Brown and Company.
Descartes, René. 1637. Discourse on method. Trans. John Cottingham,
Robert Stoothoff and Dugald Murdoch. In The philosophical writings
of Descartes, Vol. I, 109-151. New York: Cambridge University
Fodor, J. A. 1980b. Methodological solipsism considered as
a research strategy in cognitive science. Behavioral and Brain Sciences 3.
Fisher, John A. 1988.
The wrong stuff: Chinese rooms and the nature of understanding.
1987. In defense of artificial intelligence - a reply to John Searle.
In Mindwaves, ed. Colin Blakemore and
Susan Greenfield, 235-244. Oxford: Basil Blackwell.
Harman, Gilbert. 1990.
Intentionality: Some distinctions. Behavioral and Brain Sciences 13:607-608.
Harnad, Stevan. 1982.
Consciousness: An afterthought. Cognition and Brain Theory 5:29-47.
---. 1989a. Minds, machines and Searle. Journal of Experimental and Theoretical Artificial
Intelligence 1(1):5-25.
---. 1989b. Editorial commentary on Libet. Behavioral and Brain Sciences.
---. 1990. The symbol grounding problem. Physica D 42:335-346.
Hauser, Larry. 1992.
Act, aim, and unscientific explanation. Philosophical Investigations 15(10, October 1992):313-323.
---. 1993c. Why isn't
my pocket calculator a thinking thing? Minds and Machines 3(1, February 1993):3-10.
1993b. The sense of "thinking." Minds and Machines 3(1, February 1993):12-21.
Larry (1997b), "Review of Selmer Bringsjord's What Robots
Can and Can't Be", Minds and
Vol. 7, No. 3 (August 1997), pp. 433-438.
Hayes, P. J. 1982.
Introduction. In Proceedings
of the Cognitive Curricula Conference, vol. 2, ed. P. J. Hayes and M. M. Lucas. Rochester,
NY: University of Rochester.
Hayes, Patrick, Stevan
Harnad, Donald Perlis, and Ned Block. 1992. Virtual symposium on
virtual mind. Minds and Machines.
Frank. 1982. "Epiphenomenal qualia." Philosophical Quarterly
Jacquette, Dale. 1989.
Adventures in the Chinese room. Philosophy and Phenomenological Research XLIX(4, June):606-623.
Lyons, William. 1985.
On Searle's "solution" to the mind-body problem. Philosophical Studies 48:291-294.
---. 1986. The Disappearance of Introspection. Cambridge, MA: MIT Press.
G. 1990. Not a trivial consequence. Behavioral and Brain Sciences 13:193-194.
Maloney, J. Christopher. 1987. The right stuff. Synthese 70:349-372.
McCarthy, John. 1979. Ascribing mental qualities
to machines. In Philosophical Perspectives
in Artificial Intelligence, ed.
M. Ringle. Atlantic Highlands, NJ: Humanities Press.
Mill, J. S.
An Examination of Sir William Hamilton's Philosophy (6th ed.). London: Longmans, Green.
Nagel, Thomas. 1974. What is it like to be a bat? Philosophical Review 83:435-450.
---. 1986. The View from Nowhere. Oxford: Oxford University Press.
Penrose, Roger. 1994.
Shadows of the Mind: A Search for the Missing Science of Consciousness.
Oxford: Oxford University Press.
Zombies. A Field Guide to the Philosophy of Mind, ed. M. Nani and M. Marraffa.
Puccetti, Roland. 1980. The chess room: further demythologizing
of strong AI. Behavioral and Brain Sciences 3:441-442.
Putnam, Hilary. 1975a. The meaning of `meaning'. In Mind, Language
and Reality: Philosophical Papers, vol. 2. Cambridge: Cambridge University Press.
---. 1983. Models and reality. In Realism and Reason: Philosophical Papers, vol. 3.
Cambridge: Cambridge University Press.
Rapaport, William J. 1986. Searle's experiments with thought. Philosophy of Science 53:271-279.
---. 1988. Syntactic semantics: foundations of computational natural
language understanding. In Aspects
of artificial intelligence, ed. James H. Fetzer, 81-131. Dordrecht, Netherlands: Kluwer.
Russell, Bertrand. 1948. Human
Knowledge: Its Scope and Limits. New York: Simon & Schuster.
Savitt, Steven F.
1982. Searle's demon and the brain simulator. Behavioral and Brain Sciences 5(2):342-343.
Schank, Roger C., and Robert P. Abelson. 1977. Scripts, plans, goals,
and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
Schank, Roger C. 1977.
Natural language, philosophy, and artificial intelligence. In Philosophical perspectives
in artificial intelligence, ed. M. Ringle, 196-224. Brighton, Sussex: Harvester Press.
Searle, John R., and
Daniel Dennett. 1982. The myth of the computer. New York Review of Books 57(24/July):56-57.
1990. "The emperor's new mind": An exchange. New York Review of Books XXXVII(10, June):58-60.
Searle, John R. 1971a.
Speech acts. New York: Cambridge University Press.
---. 1971b. What is a speech act? In The philosophy of language, ed. John Searle. Oxford: Oxford University Press.
---. 1975a. Speech
acts and recent linguistics. In Expression and meaning, 162-179. Cambridge: Cambridge University Press.
---. 1975b. Indirect speech acts. In Expression and meaning, 30-57. Cambridge: Cambridge University Press.
---. 1975c. A taxonomy of illocutionary acts. In Expression and meaning, 1-29. Cambridge: Cambridge University Press.
---. 1977. Reiterating the differences: A reply to Derrida. Glyph 2:198-208. Baltimore: Johns Hopkins University Press.
---. 1978. Literal meaning. In Expression
and meaning, 117-136. Cambridge: Cambridge University Press.
---. 1979a. What is
an intentional state? Mind LXXXVIII:74-92.
---. 1979b. Intentionality and the use of language. In Meaning and use, ed. A. Margalit, 181-197. Dordrecht, Netherlands:
D. Reidel Publishing Co.
---. 1979c. The intentionality
of intention and action. Inquiry 22:253-280.
---. 1979d. Metaphor. In Expression
and meaning, 76-116. Cambridge: Cambridge University Press.
---. 1979e. Referential and attributive. In Expression and meaning, 137-161. Cambridge: Cambridge University Press.
---. 1980a. Minds, brains, and programs. Behavioral and Brain Sciences 3:417-424.
---. 1980b. Intrinsic intentionality. Behavioral and Brain Sciences 3.
---. 1980c. Analytic philosophy and mental phenomena. In Midwest studies in philosophy, vol. 5, 405-423. Minneapolis:
University of Minnesota Press.
---. 1980d. The background of meaning. In Speech act theory and pragmatics, ed. J. R. Searle, F. Kiefer
and M. Bierwisch, 221-232. Dordrecht, Netherlands: D. Reidel Publishing
---. 1982. The Chinese
room revisited. Behavioral
and Brain Sciences 5:345-348.
---. 1983. Intentionality:
an essay in the philosophy of mind. New
York: Cambridge University Press.
---. 1984a. Minds, brains and science. Cambridge, MA: Harvard University Press.
---. 1984b. Intentionality and its place in nature. Synthese 61:3-16.
---. 1985. Patterns, symbols and understanding. Behavioral and Brain Sciences 8(4):742-743.
---. 1986. Meaning, communication and representation. In Philosophical grounds of
rationality, ed. R. Grandy and R. Warner, 209-226.
---. 1987. Indeterminacy,
empiricism, and the first person. Journal of Philosophy LXXXIV(3, March):123-146.
---. 1988. Minds and brains without programs. In Mindwaves,
ed. Colin Blakemore and Susan Greenfield, 209-233. Oxford: Basil Blackwell.
---. 1989a. Reply to Jacquette. Philosophy
and Phenomenological Research XLIX(4):701-708.
---. 1989b. Consciousness, unconsciousness, and intentionality. Philosophical Topics XVII(1, spring):193-209.
---. 1989c. How performatives work. Linguistics and Philosophy 12:535-558.
---. 1990b. Consciousness, unconsciousness and intentionality. In Propositional attitudes:
the role of content in logic, language, and mind, ed. Anderson A. and J. Owens,
269-284. Stanford, CA: Center for the Study of Language and Information.
---. 1990c. Is the brain a digital computer? Proceedings of the American Philosophical Association 64(3):21-37.
---. 1990d. Foreword to Amichai Kronfeld's Reference and computation. In Reference and computation,
by Amichai Kronfeld, xii-xviii. Cambridge: Cambridge University Press.
---. 1990e. The causal
powers of the brain. Behavioral
and Brain Sciences.
---. 1990f. Consciousness, explanatory inversion, and cognitive science.
Behavioral and Brain Sciences 13:585-596.
---. 1990g. Who is computing with the brain? Behavioral and Brain Sciences 13:632-640.
---. 1991a. Meaning, intentionality and speech acts. In John Searle and his critics, ed. Ernest Lepore and Robert Van Gulick, 81-102.
Cambridge, MA: Basil Blackwell.
---. 1991b. The mind-body
problem. In John
Searle and his critics,
ed. Ernest Lepore and Robert Van Gulick, 141-147. Cambridge, MA: Basil Blackwell.
---. 1991c. Perception
and the satisfactions of intentionality. In John Searle and his critics, ed. Ernest Lepore and Robert Van Gulick, 181-192.
Cambridge, MA: Basil Blackwell.
---. 1991d. Reference
and intentionality. In John
Searle and his critics,
ed. Ernest Lepore and Robert Van Gulick, 227-241. Cambridge, MA: Basil Blackwell.
---. 1991e. The background
of intentionality and action. In John Searle and his critics, ed. Ernest Lepore and Robert Van Gulick, 289-299.
Cambridge, MA: Basil Blackwell.
---. 1991f. Explanation
in the social sciences. In John
Searle and his critics,
ed. Ernest Lepore and Robert Van Gulick, 335-342. Cambridge, MA: Basil Blackwell.
Searle, J. R., J. McCarthy, H. Dreyfus, M. Minsky,
and S. Papert. 1984. Has artificial intelligence research illuminated
human thinking? Annals of the New York
Academy of Sciences 426:138-160.
Searle, J. R., K.
O. Apel, W. P. Alston, et al. 1991. John Searle and his critics. Ed. Ernest Lepore and Robert Van Gulick. Cambridge,
MA: Basil Blackwell.
Sharvy, Richard. 1985. It ain't the meat it's the
motion. Inquiry 26:125-134.
Stipp, David. 1991.
Does that computer have something on its mind? Wall Street Journal, Tuesday, March 19, A20.
Stephen. "Solipsism and Other Minds" entry in the
Encyclopedia of Philosophy.
Turing, Alan M. 1936-7. On computable numbers with
an application to the Entscheidungsproblem. In The
undecidable, ed. Martin Davis, 116-154.
New York: Raven Press, 1965. Originally published in Proceedings of the London Mathematical Society, ser. 2, vol. 42 (1936-7), pp. 230-265; corrections
ibid., vol. 43 (1937), pp. 544-546.
---. 1950. Computing machinery and intelligence. Mind 59:433-460.
Weiss, Thomas. 1990. Closing the Chinese room. Ratio (New Series)
Weizenbaum, Joseph. 1965. Eliza - a computer program for the study of natural
language communication between man and machine. Communications of the Association for Computing Machinery.
---. 1976. Computer power and human reason.
San Francisco: W. H. Freeman.
Wilks, Yorick. 1982.
Searle's straw man. Behavioral
and Brain Sciences.
Zalta, Edward N. 1999. Gottlob Frege. Stanford Encyclopedia of Philosophy.