John Searle's (1980a) thought experiment and associated (1984a) argument constitute one of the best-known and most widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers do or at least can
(roughly, someday will) think. According to Searle's original presentation,
the argument is based on two truths: brains cause minds, and syntax
doesn't suffice for semantics. Its target, Searle dubs "strong AI":
"according to strong AI," according to Searle, "the computer is not merely a
tool in the study of the mind, rather the appropriately programmed computer
really is a mind in the sense that computers given the right programs can be
literally said to understand and have other cognitive states" (1980a, p. 417).
Searle contrasts "strong AI" to "weak AI". According to weak AI, according
to Searle, computers just simulate thought, their seeming understanding
isn't real (just as-if) understanding, their seeming calculation as-if
calculation, etc.; nevertheless, computer simulation is useful for studying the
mind (as for studying the weather and other things).
Against "strong AI," Searle (1980a) asks you to imagine yourself a
monolingual English speaker "locked in a room, and given a large batch of
Chinese writing" plus "a second batch of Chinese script" and "a set of rules"
in English "for correlating the second batch with the first batch." The rules
"correlate one set of formal symbols with another set of formal symbols";
"formal" (or "syntactic") meaning you "can identify the symbols entirely by
their shapes." A third batch of Chinese symbols and more instructions in
English enable you "to correlate elements of this third batch with elements
of the first two batches" and instruct you, thereby, "to give back certain
sorts of Chinese symbols with certain sorts of shapes in response." Those
giving you the symbols "call the first batch 'a script'" [a data structure with natural language processing applications], "they call the second batch 'a story', and they call the third batch 'questions'"; the symbols you give back
"they call . . . 'answers to the questions'"; "the set of rules in English . . .
they call 'the program'": you yourself know none of this. Nevertheless, you
"get so good at following the instructions" that "from the point of view of
someone outside the room" your responses are "absolutely indistinguishable
from those of Chinese speakers." Just by looking at your answers, nobody
can tell you "don't speak a word of Chinese." Producing answers "by
manipulating uninterpreted formal symbols," it seems "[a]s far as the
Chinese is concerned," you "simply behave like a computer"; specifically,
like a computer running Schank and Abelson's (1977) "Script Applier
Mechanism" story understanding program (SAM), which Searle's takes for
his example. But in imagining himself to be the person in the room, Searle
thinks it's "quite obvious . . . I do not understand a word of the Chinese
stories. I have inputs and outputs that are indistinguishable from those of
the native Chinese speaker, and I can have any formal program you like,
but I still understand nothing." "For the same reasons," Searle concludes,
"Schank's computer understands nothing of any stories" since "the computer
has nothing more than I have in the case where I understand nothing"
(1980a, p. 418). Furthermore, since in the thought experiment "nothing . . .
depends on the details of Schank's programs," the same "would apply to
any [computer] simulation" of any "human mental phenomenon" (1980a, p.
417); that's all it would be: simulation. Contrary to "strong AI", then, no matter how intelligently a computer seems to behave and no matter what programming makes it behave that way, since the symbols it processes are meaningless (lack semantics) to it, it's not really intelligent. It's not actually thinking. Its internal states and processes, being purely syntactic, lack semantics (meaning); so, it doesn't really have intentional (i.e., meaningful) mental states.
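The purely formal character of the processing Searle imagines can be illustrated with a toy sketch. The following Python fragment is only a stand-in: its two-entry rule table and its particular symbols are invented for illustration and bear no relation to Schank and Abelson's actual SAM program; but it displays what identifying and returning symbols "entirely by their shapes" amounts to.

```python
# A toy stand-in for the room's rule book: output shapes are paired
# with input shapes entirely by their shapes (string identity); no
# meanings are consulted anywhere. (Invented example, not SAM.)
RULES = {
    "你好吗": "我很好",  # "if given these shapes, give back those shapes"
}

def chinese_room(symbols: str) -> str:
    """Return whatever shapes the rules pair with the given shapes.

    Matching is by shape alone; this function has no access to what
    any symbol is about.
    """
    return RULES.get(symbols, "")

print(chinese_room("你好吗"))  # emits the paired shapes
```

The lookup would work just as well over meaningless squiggles; that, in a nutshell, is the syntactic manipulation from which, Searle contends, no semantics can be conjured.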
Having laid out the example and drawn the aforesaid conclusion, Searle considers several replies offered when he "had the occasion to present this example to a number of workers in artificial intelligence" (1980a, p. 419). Searle offers rejoinders to these various replies.
The Systems Reply suggests that the Chinese room example encourages us to focus on the wrong agent: the thought experiment encourages us to mistake the would-be subject-possessed-of-mental-states for the person in the room. The systems reply grants that "the individual who is locked in the room does not understand the story" but maintains that "he is merely part of a whole system, and the system does understand the story" (1980a, p. 419: my emphases). Searle's main rejoinder to this is to "let the individual internalize all . . . of the system" by memorizing the rules and script and doing the lookups and other operations in their head. "All the same," Searle maintains, "he understands nothing of the Chinese, and . . . neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand then there is no way the system could understand because the system is just part of him" (1980a, p. 420). Searle also insists the systems reply would have the absurd consequence that "mind is everywhere." For instance, "there is a level of description at which my stomach does information processing" there being "nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire" (1980a, p. 420).1 Besides, Searle contends, it's just ridiculous to say "that while [the] person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might" (1980a, p. 420: cf., Harnad 1991).
The Robot Reply - along lines favored by contemporary causal theories of reference - suggests what prevents the person in the Chinese room from attaching meanings to (and thus prevents them from understanding) the Chinese ciphers is the sensorimotoric disconnection of the ciphers from the realities they are supposed to represent: to promote the "symbol" manipulation to genuine understanding, according to this causal-theoretic line of thought, the manipulation needs to be grounded in the outside world via the agent's causal relations to the things to which the ciphers, as symbols, apply. If we "put a computer inside a robot" so as to "operate the robot in such a way that the robot does something very much like perceiving, walking, moving about," however, then the "robot would," according to this line of thought, "unlike Schank's computer, have genuine understanding and other mental states" (1980a, p. 420). Against the Robot Reply Searle maintains "the same experiment applies" with only slight modification. Put the room, with Searle in it, inside the robot; imagine "some of the Chinese symbols come from a television camera attached to the robot" and that "other Chinese symbols that [Searle is] giving out serve to make the motors inside the robot move the robot's legs or arms." Still, Searle asserts, "I don't understand anything except the rules for symbol manipulation." He explains, "by instantiating the program I have no [mental] states of the relevant [meaningful, or intentional] type. All I do is follow formal instructions about manipulating formal symbols." Searle also charges that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation" after all, as "strong AI" supposes, since it "adds a set of causal relation[s] to the outside world" (1980a, p. 420).
The Brain Simulator Reply asks us to imagine that the program implemented by the computer (or the person in the room) "doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them." Surely then "we would have to say that the machine understood the stories"; or else we would "also have to deny that native Chinese speakers understood the stories" since "[a]t the level of the synapses" there would be no difference between "the program of the computer and the program of the Chinese brain" (1980a, p. 420). Against this, Searle insists, "even getting this close to the operation of the brain is still not sufficient to produce understanding" as may be seen from the following variation on the Chinese room scenario. Instead of shuffling symbols, we "have the man operate an elaborate set of water pipes with valves connecting them." Given some Chinese symbols as input, the program now tells the man "which valves he has to turn off and on. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged so that after . . . turning on all the right faucets, the Chinese answer pops out at the output end of the series of pipes." Yet, Searle thinks, obviously, "the man certainly doesn't understand Chinese, and neither do the water pipes." "The problem with the brain simulator," as Searle diagnoses it, is that it simulates "only the formal structure of the sequence of neuron firings": the insufficiency of this formal structure for producing meaning and mental states "is shown by the water pipe example" (1980a, p. 421).
The Combination Reply supposes all of the above: a computer lodged in a robot running a brain simulation program, considered as a unified system. Surely, now, "we would have to ascribe intentionality to the system" (1980a, p. 421). Searle responds, in effect, that since none of these replies, taken alone, has any tendency to overthrow his thought experimental result, neither do all of them taken together: zero times three is naught. Though it would be "rational and indeed irresistible," he concedes, "to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it" the acceptance would be simply based on the assumption that "if the robot looks and behaves sufficiently like us then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior." However, "[i]f we knew independently how to account for its behavior without such assumptions," as with computers, "we would not attribute intentionality to it, especially if we knew it had a formal program" (1980a, p. 421).
The Other Minds Reply reminds us that how we "know other people understand Chinese or anything else" is "by their behavior." Consequently, "if the computer can pass the behavioral tests as well" as a person, then "if you are going to attribute cognition to other people you must in principle also attribute it to computers" (1980a, p. 421). Searle responds that this misses the point: it's "not . . . how I know that other people have cognitive states, but rather what it is that I am attributing when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state" (1980a, pp. 420-421: my emphases).
The Many Mansions Reply suggests that even if Searle is right that programming cannot suffice to cause computers to have intentionality and cognitive states, other means besides programming might be devised by which computers could be imbued with whatever does suffice for intentionality. This too, Searle says, misses the
point: it "trivializes the project of Strong AI by redefining it as whatever
artificially produces and explains cognition" abandoning "the original claim
made on behalf of artificial intelligence" that "mental processes are
computational processes over formally defined elements." If AI is not
identified with that "precise, well defined thesis," Searle says, "my
objections no longer apply because there is no longer a testable hypothesis
for them to apply to" (1980a, p. 422).
Besides the Chinese room thought experiment, Searle's more recent presentations of the Chinese room argument (cf., Searle 1984a, 1994) feature - with minor variations of wording and in the ordering of the premises - a formal "derivation from axioms" (1989a, p. 701). The derivation, according to Searle's 1990a formulation, proceeds from the following three axioms (1990a, p. 27):

Axiom 1. Computer programs are formal (syntactic).

Axiom 2. Human minds have mental contents (semantics).

Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.

to the conclusion:

Conclusion 1. Programs are neither constitutive of nor sufficient for minds.

Searle then adds a fourth axiom (p. 29):

Axiom 4. Brains cause minds.

from which we are supposed to "immediately derive, trivially" the conclusion:

Conclusion 2. Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.

whence we are supposed to derive the further conclusions:

Conclusion 3. Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

Conclusion 4. The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
On the usual understanding, the Chinese room experiment subserves this
derivation by "shoring up axiom 3" (Churchland & Churchland 1990, p. 34).
To call the Chinese room controversial would be an understatement. Beginning with objections published along with Searle's original (1980a) presentation, opinions have divided drastically, not only about whether the Chinese room argument is cogent but, among those who think it is, as to why it is, and, among those who think it is not, as to why not. This discussion includes several noteworthy threads.
Initial Objections & Replies to the Chinese room argument, besides filing new briefs on behalf of many of the forenamed replies (e.g., Fodor 1980 on behalf of "the Robot Reply"), take, notably, two tacks. One tack, taken by Daniel Dennett (1980), among others, decries the dualistic tendencies discernible, for instance, in Searle's methodological maxim "always insist on the first-person point of view" (Searle 1980b, p. 451). Another tack notices that the symbols Searle-in-the-room processes are not meaningless ciphers but Chinese inscriptions. So they are meaningful; and so is Searle's processing of them in the room, whether he knows it or not. In reply to this second sort of objection, Searle insists that what's at issue here is intrinsic intentionality, in contrast to the merely derived intentionality of inscriptions and other linguistic signs. Whatever meaning Searle-in-the-room's computation might derive from the meaning of the Chinese symbols he processes will not be intrinsic to the process or the processor but "observer relative," existing only in the minds of beholders such as the native Chinese speakers outside the room. "Observer-relative ascriptions of intentionality are always dependent on the intrinsic intentionality of the observers" (Searle 1980b, pp. 451-452). The nub of the experiment, according to Searle's attempted clarification, then, is this: "instantiating a program could not be constitutive of intentionality, because it would be possible for an agent to instantiate the program and still not have the right kind of intentionality" (Searle 1980b, pp. 450-451: my emphasis); the intrinsic kind. Though Searle unapologetically identifies intrinsic intentionality with conscious intentionality, still he resists Dennett's and others' imputations of dualism. Given that what we are attributing in attributing mental states is conscious intentionality, Searle maintains, insistence on the "first-person point of view" is warranted, because "the ontology of the mind is a first-person ontology": "the mind consists of qualia [subjective conscious experiences] . . . right down to the ground" (1992, p. 20). This thesis of Ontological Subjectivity, as Searle calls it in more recent work, is not, he insists, some dualistic invocation of discredited "Cartesian apparatus" (Searle 1992, p. xii), as his critics charge; it simply reaffirms commonsensical intuitions that behavioristic views and their functionalistic progeny have, for too long, high-handedly dismissed. This commonsense identification of thought with consciousness, Searle maintains, is readily reconcilable with thoroughgoing physicalism when we conceive of consciousness as both caused by and realized in underlying brain processes. Identification of thought with consciousness along these lines, Searle insists, is not dualism; it might more aptly be styled monist interactionism (1980b, pp. 455-456) or (as he now prefers) "biological naturalism" (1992, p. 1).
The Connectionist Reply (as it might be called) is set forth - along with a
recapitulation of the Chinese room argument and a rejoinder by Searle - by
Paul and Patricia Churchland in (1990). The Churchlands criticize the crucial
third "axiom" of Searle's "derivation" by attacking his would-be supporting
thought experimental result. This putative result, they contend, gets much if
not all of its plausibility from the lack of neurophysiological verisimilitude in
the thought experimental setup. Instead of imagining Searle working alone
with his pad of paper and lookup table, like the Central Processing Unit of a
serial architecture machine, the Churchlands invite us to imagine a more
brainlike connectionist architecture. Imagine Searle-in-the-room, then, to be
just one of very many agents, all working in parallel, each doing their own
small bit of processing (like the many neurons of the brain). Since
Searle-in-the-room, in this revised scenario, does only a very small portion
of the total computational job of generating sensible Chinese replies in
response to Chinese input, naturally he himself does not comprehend the
whole process; so we should hardly expect him to grasp or to be conscious
of the meanings of the communications he is involved, in such a minor way,
in processing. Searle counters that this Connectionist Reply - incorporating,
as it does, elements of both systems and brain simulator replies - can, like
these predecessors, be decisively defeated by appropriately tweaking the
thought experimental scenario. Imagine, if you will, a Chinese gymnasium,
with many monolingual English speakers working in parallel, producing
output indistinguishable from that of native Chinese speakers: each follows
their own (more limited) set of instructions in English. Still, Searle insists, it's intuitively utterly obvious that no one and nothing in this revised "Chinese gym" experiment understands a word of Chinese, either individually or collectively: nothing is being done in the gym except meaningless syntactic manipulations, from which intentionality and consequently meaningful thought could not conceivably arise.
Searle's Chinese room experiment parodies the Turing test, a test for artificial intelligence proposed by Alan Turing (1950) and echoing René Descartes' suggested means for distinguishing thinking souls from unthinking automata. Since "it is not conceivable," Descartes says, that a machine "should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as even the dullest of men can do" (1637, Part V), whatever has such ability evidently thinks. Turing embodies this conversation criterion in a would-be experimental test of machine intelligence; in effect, a "blind" interview. Not knowing which is which, a human interviewer addresses questions, on the one hand, to a computer, and, on the other, to a human being. If, after a decent interval, the questioner is unable to tell which interviewee is the computer on the basis of their answers, then, Turing concludes, we would be well warranted in judging that the computer, like the person, actually thinks. Restricting himself to the epistemological claim that under the envisaged circumstances attribution of thought to the computer is warranted, Turing himself hazards no metaphysical guesses as to what thought is, proposing no definition of, nor conjecture about, its essential nature. Nevertheless, his would-be experimental apparatus can be used to characterize the main competing metaphysical hypotheses here in terms of their answers to the question of what else or what instead, if anything, is required to guarantee that intelligent seeming behavior really is intelligent or evinces thought. Roughly speaking, we have four sorts of hypotheses on offer. Behavioristic hypotheses deny that anything besides acting intelligent is required. Dualistic hypotheses hold that, besides (or instead of) intelligent seeming behavior, thought requires having the right subjective conscious experiences. Identity theoretic hypotheses hold it to be essential that the intelligent seeming performances proceed from the right underlying neurophysiological states. Functionalistic hypotheses such as Computationalism (the view that computation is what thought essentially is) hold that the intelligent seeming behavior must be produced by the right procedures or computations.
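By way of illustration only, Turing's "blind" interview might be sketched as follows; the two canned respondents are invented placeholders (nothing of the sort appears in Turing 1950), and nothing turns on their particular answers.

```python
# A minimal sketch of the "blind" interview: the interviewer sees only
# anonymously labeled answers, never which respondent is the machine.
import random

def machine_respondent(question: str) -> str:
    return "That is an interesting question."  # placeholder program

def human_respondent(question: str) -> str:
    return "Hmm, let me think about that."     # placeholder person

def blind_interview(questions):
    # Randomly hide the respondents behind the labels A and B.
    a, b = random.sample([machine_respondent, human_respondent], 2)
    transcript = [(q, a(q), b(q)) for q in questions]
    # The test asks whether the transcript alone lets the interviewer
    # tell which of A and B is the computer.
    answer_key = "A" if a is machine_respondent else "B"
    return transcript, answer_key

transcript, which_is_machine = blind_interview(["Can you write a sonnet?"])
```

The point of the apparatus is precisely that the interviewer's verdict rests on the answers alone, never on a peek at the answer key.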
The Chinese room experiment, then, can be seen to take aim at Behaviorism and Functionalism as a would-be counterexample to both. Searle-in-the-room behaves as if he understands Chinese; yet doesn't understand: so, contrary to Behaviorism, acting (as-if) intelligent does not suffice for being so; something else is required. But, contrary to Functionalism, this something else is not - or at least, not just - a matter of what underlying procedures (or programming) bring about the intelligent seeming behavior: Searle-in-the-room, according to the thought experiment, may be implementing whatever program you please, yet still be lacking the mental state (understanding Chinese) that his behavior would seem to evidence. Thus, Searle claims, Behaviorism and Functionalism are utterly refuted by this experiment, leaving dualistic and identity theoretic hypotheses in control of the field. Searle's own hypothesis of "biological naturalism" may be characterized sympathetically as an attempt to wed - or unsympathetically as an attempt to waffle between - the remaining dualistic and identity theoretic alternatives.
Implementation: An Underlying Issue
Of course, no one holds that programs in the abstract suffice for thought. Though Searle sometimes speaks as if this were the view he is attacking (see, e.g., Searle 1999), we need to distinguish this - straw AI - from views actually held and worth opposing. Perhaps by "Strong AI," then, Searle is best understood to mean the view that bare implementation suffices. On this "High Church" view (as Hauser 1993a calls it) execution of a right program is all that's required - whatever the processing speed, whatever the external relations (especially causal relations), and whatever the implementing medium (silicon chips, organic neurons, whatever). This much appeal, to the bare fact of implementation, is required just by way of clarification of "Strong AI": an implementation reply would go further. The nub of Searle's would-be case against Computationalism being counterexemplification, the implementation reply - that Searle in the room is not a right implementation (whether there are programs which, when rightly implemented, suffice for thought or not) - underlies several others. This presents the question: what makes something an implementation, and what (if anything) beyond that is required for its being right (where "right" means sufficient for thought)? Though there is general agreement that implementing a program depends on internal causal processes and contingencies realized in the machine mirroring (or being isomorphic to) computational transitions and contingencies spelled out abstractly by the program, there is considerable disagreement as to the details and consequences of this. Here, Searle insists that the mirroring exists only relative to our interpretations, making computation causally impotent and rendering the concept computation unfit for scientific explanatory employment (Searle 1990c: cf., Searle 1999). Beyond this foundational dispute, there is also disagreement concerning what further conditions (if any) have to be met to make the implementation right.
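The mirroring requirement just mentioned can be given a toy rendering. In the following sketch (the two-state "program," the physical "system," and the labeling are all invented for illustration; this is no one's official definition of implementation), a system implements a program relative to a labeling of its states just in case the labeling carries the system's state transitions onto the program's.

```python
# Toy rendering of implementation-as-mirroring: an invented abstract
# program, an invented physical system, and an observer's labeling.
PROGRAM = {("s0", "tick"): "s1", ("s1", "tick"): "s0"}  # abstract transitions
SYSTEM = {("hi", "tick"): "lo", ("lo", "tick"): "hi"}   # physical transitions
LABELING = {"hi": "s0", "lo": "s1"}                     # interpretation of states

def implements(system, program, labeling) -> bool:
    """True if the labeling maps every system transition onto a program one."""
    return all(
        program.get((labeling[state], inp)) == labeling[nxt]
        for (state, inp), nxt in system.items()
    )

print(implements(SYSTEM, PROGRAM, LABELING))  # True under this labeling
```

On this toy rendering, Searle's complaint would be that the labeling does all the work: relabel the very same physical transitions and they "implement" a different program, or none, which is why he deems computation observer relative.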
Speed limits (cf., Dennett 1987, Hauser 1993a) are the least adulterating would-be additions. These are well rooted both in computational theory (reference to time being a key part of what distinguishes computations among formalisms) and in commonsense intuitions about intelligence (evident, e.g., in talk of people being mentally "quick" and "slow"). Though speed limits have a theoretical cost - Turing's proof of the universality of the machines he describes only holds "considerations of speed apart" (Turing 1950, p. 441) - this cost may be counted a benefit insofar as it reconnects with issues of AI proper (cf., Hauser 1997) about whether actual (or forthcoming) machines really are (or someday will be) thinking; insofar, that is, as it reconnects with real experiments developing actual implementations, like SAM, and disconnects from thought experiments invoking impossible ones, like the Chinese room.
The robot reply (1980a, p. 420) would require adding causal (in particular, sensory and motoric) relations to the referents of the "data" processed in order for the processing to be thought. While this is a more considerable adulteration than speed limits, it is still in the spirit of computationalism, broadly construed. A thing's computational character (i.e., what program it is executing) depends on its internal causal relations. While adding external causal relations is an amendment, it seems friendly enough. Though it "tacitly concedes that cognition is not solely a matter of formal symbol manipulation" since it "adds a set of causal relation[s] to the outside world" (1980a, p. 420: my emphases), the robot reply is still a reply, not a retraction. "Low Church" though it may be, the robotically augmented theory should still be deemed "computational."
Even appeal to inner causal powers or relations of the implementing
medium besides those which give the implementation its computational
character (or, additionally, its speed) needn't be altogether unfriendly as
long as these elements are supposed to be required in addition to
computation (cf., Rey 1986, p. 181). This - extracomputational details left
unspecified - is Searle's anticipated "many mansions reply" (Searle 1980a,
p. 422). Searle, however, appeals to such material details of
"implementation" in rebuttal to Computationalism; not as an amendment, but
as an alternative to it. Searle holds that (unconscious) computation is no
part of any thought (causative) process: "computational operations . . . by
themselves have no interesting connection with understanding. They are
certainly not sufficient conditions, and not the slightest reason has been
given to suppose that they are necessary conditions or even that they make
a significant contribution to understanding" (Searle 1980a, p. 418: my
emphasis). His (1990f) "Connection Principle" according to which there
can't be (nonintrospectably) unconscious computation and his
Wordstar-on-the-wall argument according to which programs are not in
their implementing media but rather "in the eye of the beholder" and
consequently "have no causal powers at all" (1990c, p. 30) can be seen as
attempts to buttress his claims about the causal impotence of computation
(cf., Searle 1992, 1999). But what Searle says to rebut the "many
mansions reply" - that there is no "precise, well defined thesis" here, "no
longer a testable hypothesis" for his objections "even to apply to" (1980a, p.
422) - applies equally to his own ritual invocation of unspecified "causal
powers" of brains as a would-be alternative hypothesis to
Computationalism. Indeed, such rebuttal applies to Searle's would-be
alternative to Computationalism in spades. Searle's chosen "causal
powers," being powers to "give off consciousness" (1990c, p. 30), have no
objective (publicly observable) effects, only subjective (privately
introspectable) ones. Searle's oft-repeated "brains cause consciousness" mantra, consequently, as Chalmers notes, "settles almost nothing." It "is simply a statement of the problem, not a solution": "a real answer requires," according to Chalmers, "a detailed theory of the laws that bridge brain and consciousness" (in Searle 1999, p. 167).
On Searle's view (1) brains noncomputationally cause consciousness and (2) consciousness causes (or otherwise gives rise to) intentionality. The nub of Chalmers' stage (1) complaint (above) is that the "explanatory gap" between computational or indeed any physicalistic sort of explanation and consciousness is one to which Searle's "biological naturalist" account will be equally subject whatever brain mechanisms are supposed to be involved. Consequently, the existence of such an explanatory gap for Computationalism does nothing to support "biological naturalism" over against it. Beyond this, regarding stage (2), Searle has confessed, "The real gap in my account is ... that I do not explain the details of the relation between Intentionality and consciousness." He continues, "I am working on that now" (1991c, p. 181). The fruit of that work seems to be his 1992 discussion of "The Connection between Consciousness and Intentionality" (original emphasis) in his The Rediscovery of the Mind where he proposes "there is a conceptual connection between consciousness and intentionality" such that "[o]nly a being that could have conscious intentional states could have intentional states at all, and every unconscious intentional state is at least potentially conscious" (1992, p. 132). In support, Searle maintains,
"Aspectual shapes" are like Fregean senses or modes of presentation.2 On Searle's account, I can see or think of Venus as evening star or as morning star (under either of these aspects); I may want the water just as water (not as Evian) or as Evian not just as water; etc. Every intentional mental state has such an aspect, and this aspect is what makes the state intentional (about what it's about): sense (aspectual shape) determines reference. A is also dubious ("direct reference" theories reject A); but the detailed account promised concerns B. Searle 1992 argues negatively, at length (though not, I think, to good effect), that no "third-person, behavioral, or even neurophysiological" account "is sufficient to give an exhaustive account of aspectual shape" (p. 157-158: Searle's emphasis). But where is the promised positive account of how consciousness suffices? It is not forthcoming. Searle just says, "it is reasonably clear how this works for conscious thoughts and experiences" (p. 157), and leaves it at that! So, aspects are made of conscious experiences or qualia (somehow) and qualia (sometimes somehow) act as "metaphysical glue" (Putnam 1983, p. 18) sticking thought to things. So much for details.
Debate over the Chinese room thought experiment - while generating considerable heat - has generated little agreement. To the Chinese room's champions - as to Searle himself - the experiment and allied argument have often seemed so obviously cogent and decisively victorious that doubts professed by naysayers have seemed discreditable and disingenuous attempts to salvage "strong AI" at all costs. To the argument's detractors, on the other hand, the Chinese room has seemed more like a "religious diatribe against AI, masquerading as a serious scientific argument" (Hofstadter 1980, p. 433) than a serious objection. Though I am with the masquerade party, a full-dress criticism is, perhaps, out of place here (see Hauser 1993a and Hauser 1997). I offer, instead, the following observations about the Chinese room and vicinity.
(1) Though Searle himself has consistently (since 1984) fronted the formal "derivation from axioms," general discussion continues to focus mainly on Searle's striking thought experiment. This is unfortunate, I think. Since intuitions about the experiment seem irremediably at loggerheads, closer attention to the derivation might shed some light on the considerable vagary of the argument and attending discussion. Then again, perhaps not. With so many weighty unmet criticisms against it, the least that can be said is that the Chinese Room Argument is hardly "simple and decisive" (cf., Searle 1999). Whether it can further be fairly said of the Chinese Room Argument that "just about anyone who knows anything about the field has dismissed it long ago" as "full of well-concealed fallacies," as Dennett says (in Searle 1999, p. 116), depends on how you count experts. I, for one, have dismissed it and do find it full of fallacies (Hauser 1993a, 1997); though the argument still has defenders (cf., Bringsjord 1992, Harnad 1991). It can, I think fairly, be said that the Chinese room argument is a potent conversation starter and has been a fruitful discussion piece. Discussion of the argument has raised, and is helping to clarify, a number of broader issues concerning AI and computationalism. It can also, I think fairly, be said that Searle's arguments pose no clear and presently unmet challenge to claims of AI or Computationalism, much less, as Searle insists, "proof" (Searle 1999, p. 228) against them.
(2) The Chinese room experiment, as Searle himself notices, is akin to "arbitrary realization" scenarios of the sort suggested first, perhaps, by Joseph Weizenbaum (1976, ch. 2), who "shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones" (Searle 1980a, p. 423). Such scenarios are also marshaled against Functionalism (and Behaviorism en passant) by others, perhaps most famously, by Ned Block (1978). Arbitrary realizations imagine would-be AI programs to be implemented in outlandish ways: collective implementations (e.g., by the population of China coordinating their efforts via two-way radio communications) imagine programs implemented by groups; Rube Goldberg implementations (e.g., Searle's water pipes or Weizenbaum's toilet paper roll and stones) imagine programs implemented bizarrely, in "the wrong stuff." Such scenarios aim to provoke intuitions that no such thing - no such collective or no such ridiculous contraption - could possibly be possessed of mental states. This, together with the premise (generally conceded by Functionalists) that programs might well be so implemented, yields the conclusion that computation, the "right programming," does not suffice for thought; the programming must be implemented in "the right stuff." Searle concludes similarly that what the Chinese room experiment shows is that "[w]hat matters about brain operations is not the formal shadow cast by the sequences of synapses but rather the actual properties of the synapses" (1980a, p. 422), their "specific biochemistry" (1980a, p. 424). (See above.)
(3) Among those sympathetic to the Chinese room, it is mainly its negative claims - not Searle's positive doctrine - that garner assent. The positive doctrine, "biological naturalism," is either confused (waffling between identity theory and dualism) or else it just is identity theory or dualism (cf., Searle 1992). (See above.)
(4) Since Searle argues against identity theory, on independent grounds, elsewhere (e.g., 1992, Ch. 5); and since he acknowledges the possibility that some "specific biochemistry" different than ours might suffice to produce conscious experiences and consequently intentionality (in Martians, say), and speaks unabashedly of "ontological subjectivity" (see, e.g., Searle 1992, p. 100), it seems most natural to construe Searle's positive doctrine as basically dualistic, and more specifically as a species of "property dualism" such as Thomas Nagel (1974, 1986) and Frank Jackson (1982) have espoused. Nevertheless, Searle frequently and vigorously protests that he is not any sort of dualist. He protests too much. Refusing to call it "dualism" and inveighing against the very categories of materialism (on the one hand) and dualism (on the other) - the "deny the name strategy" (cf. Searle 1992, ch. 1; Hauser 1993a, ch. 6) - does not change the basically dualistic character of Searle's views. Searle claims to "give a coherent account of the facts about the mind without endorsing any of the discredited Cartesian apparatus" (Searle 1992, p. 14), yet his "biological naturalist" account affirms (1) the essential ontological subjectivity of mental phenomena ("the actual ontology of mental states is a first-person ontology" (p. 16)) and its correlative "Connection Principle" that "[b]eliefs, desires, etc. . . . are always potentially conscious" (p. 17); (2) a distinction between true and as-if mentality ("between something really having a mind, such as a human being, and something behaving as if it had a mind, such as a computer" (p. 16)) such as Descartes deploys to deny animals genuine mentality, which Searle redeploys (with similar intent) against computers; (3) a methodological principle of privileged access according to which "the first-person point of view is primary" (p. 20); (4) a distinction between primary ("intrinsic") and secondary ("observer relative") properties; and, perhaps most notably, (5) a Cartesian ego, i.e., "a 'first person,' an 'I,' that has these mental states" (p. 20). Searle even dots the "I"; as he should.
Searle's "`I'" is no more identifiable with body or brain than Descartes': neither every property had (e.g., a grayish color), nor every event undergone (e.g., hemorrhaging), nor even every biological function performed by a brain (e.g., cooling the blood) is mental. Nor would it avail - it would be circular, here - to say, "thoughts are subjective properties of brains": it is precisely in order to explicate what it is for a property to be subjective that Searle introduces "a `first person' an `I' that has these mental states." Given his acceptance of this and all the rest, it's hard to see what it is about Cartesian dualism - besides the name - Searle thinks "discredited."
(5) If Searle's positive views are basically dualistic - and, connectedly, if the Chinese room argument is basically about consciousness as so many friends (cf., Harnad 1991, Bringsjord 1992) as well as critics (cf., Chalmers 1996, Dennett 1997, Hauser forthcoming) of the Chinese room argument alike recognize - then the usual objections to dualism apply (see Hauser 1993a, ch. 6), other-minds troubles among them. The "other-minds" reply can hardly be said to "miss the point." Indeed, since the question of whether computers (can) think just is an other-minds question, if other minds questions "miss the point" it's hard to see how the Chinese room speaks to the issue of whether computers really (can) think at all (cf., Searle 1992, Hauser 1993b).
(6) Confusion on the preceding point is fueled by Searle's use of the phrase "strong AI" to mean, on the one hand, that computers really do think and, on the other hand, that thought is essentially just computation. Even if thought is not essentially just computation, computers (even present-day ones), nevertheless, might really think. That their behavior seems to evince thought is why there is a problem about AI in the first place; and if Searle's argument merely discountenances theoretic or metaphysical identification of thought with computation, the behavioral evidence - and consequently Turing's (1950) point - remains unscathed. Since computers seem, on the face of things, to think, the conclusion that the essential nonidentity of thought with computation would seem to warrant is that whatever else thought essentially is, computers have this too; not, as Searle maintains, that computers' seeming thought-like performances are bogus. Alternatively put, equivocation on "Strong AI" invalidates the would-be dilemma that Searle's initial contrast of "Strong AI" to "Weak AI" seems to pose:
To show that thought is not just computation (what the Chinese room - if it
shows anything - shows) is not to show that computers' intelligent seeming
performances are not real thought (as the "strong" "weak" dichotomy here
suggests). (cf., Hauser 1997, forthcoming).
2. A Fregean sense, roughly characterized, is the manner in which one conceives of or picks out the referent of a word or other linguistic expression. For instance, "The expressions `4' and `8/2' have the same denotation but express different senses, different ways of conceiving the same number. The descriptions `the morning star' and `the evening star' denote the same planet, namely Venus, but express different ways of conceiving of Venus and so have different senses. The name `Pegasus' and the description `the most powerful Greek god' both have a sense (and their senses are distinct), but neither has a denotation" (Zalta 1999).