The Language of Thought
  
Lawrence Kaye - University of Massachusetts at Boston

 

In his (1975) Jerry Fodor offered a bold hypothesis: the medium of thought is an innate language that is distinct from all spoken languages and is semantically expressively complete. So-called "Mentalese" is supposed to be an inner language that contains all of the conceptual resources necessary for any of the propositions that humans can grasp, think or express--in short, the basis of thought and meaning. 

While few have followed Fodor in adopting this extreme hypothesis, some weaker form of a language of thought (LOT) view, i.e., that there is a mental language that is different from human spoken languages, is held by many philosophers and cognitive scientists. As we will see below, however, although it is fairly clear that (some) thought is linguistic, there is no basis for believing in a Mentalese, let alone an innate, semantically complete Mentalese. 

Fodor's LOT hypothesis may be divided into five component theses: 

(1) Representational Realism: Thinkers have explicit representational systems; to think a thought with a given content is to be appropriately related to a representation with the right meaning, e.g., to have the belief that capitalism breeds greed is to have a representational token with the content "capitalism breeds greed" in one's belief box. 

(2) Linguistic Thought: The (main) representational system that underlies human thought, and perhaps that underlies thought in other species too, is semantically and syntactically language-like, i.e., it is similar to spoken human languages. Specifically, this representational system consists of syntactic tokens that are capable of expressing propositional meanings in virtue of the semantic compositionality of the syntactic elements. E.g., there are mental words that express concepts (and the like) that can be formed into true or false mental sentences. 

(3) Distinctness: The language of thought is not identical to any spoken language. 

(4) Nativism: There is a single genetically determined mental language possessed by humans, and perhaps (at least partially possessed) by all other thinking species. 

(5) Semantic Completeness: This language is expressively semantically complete--any predicate that we are able to semantically comprehend is expressible in this language. 

After briefly considering the first, widely accepted, thesis, I will turn to the second and review a number of strong reasons for believing it as well. I will then note an alternative to Fodor's hypothesis and proceed to critically examine arguments for Mentalese.
 

Representational Realism 

To be a representational realist is to think that some, many or most mental state attributions that involve apparent content, e.g., the belief that there is no justice in poverty, correspond to mental states that are related to explicit tokens with the expressed content. That is, what makes the above attribution true is that the mind/brain actually possesses a representation that means that there is no justice in poverty. 

Representational realism may be held in regard to theories in cognitive psychology or in regard to commonsense belief-desire psychology. Fodor (1987, Chapter 1) combines both in what he terms the Representational Theory of Mind. RTM endorses representational realism about mental state attributions and also claims that scientific psychology will vindicate commonsense belief-desire psychology by offering theories that describe "causal sequences of tokenings of representations" that make belief-desire explanations true (ibid., pp. 16-7.) (Fodor 1975 contains the same outlook, but there is no explicit discussion of realism about commonsense psychology. And Fodor 1998a, pp.6 ff., contains his most recent summary presentation of RTM.) 

It is possible to be less of a realist than Fodor and still accept some version of the language of thought hypothesis, viz., by being a realist either only about cognitive psychology or only about commonsense psychology. I will not explore such options, but simply note that the only variation that would be inconsistent with the subsequent theses is realism about scientific psychology combined with eliminativism about commonsense psychology, on the assumption that the content attributed by yet-to-be-developed psychological theories will be drastically different from the content of belief-desire ascriptions in being non-linguistic. (But why should anyone believe this?) 

The alternatives to representational realism are instrumentalism, eliminativism and non-representational realism. Instrumentalists believe that while mental state ascriptions and explanations are not literally true, they are useful fictions. Dennett (1987) offers a contemporary form of instrumentalism about belief-desire psychology. Paul Churchland has long been the defender of eliminativism about belief-desire psychology (1981) as well as the champion of connectionist methodology, which he (usually) sees as yielding a non-representational psychology/brain science. Logical behaviorism (Ryle 1949) is the traditional form of non-representational realism about belief-desire attributions. And Stich (1983) offers realism about causal/syntactic scientific psychological theories together with eliminativism about meaning. 

Since the considerations involving representational realism are so broad, and since many of them have little specific bearing on the language of thought hypothesis, I will not discuss the issue further, except to note that there is a broad consensus in contemporary analytic philosophy, as well as cognitive psychology, that some form of representational realism must be correct. The main positive consideration in favor of representational realism, as with most realisms, is that a causal realist interpretation of successful explanations (both in scientific psychology and commonsense psychology) provides the most plausible explanation of their success and apparent truth. Not surprisingly, all of the above alternatives struggle mightily with this point.
 

Linguistic Thought 

There are a number of considerations that lend very strong support to the view that thought is linguistic, i.e., that explicit mental representations occur in language-like representational systems. The first three considerations, namely, productivity, systematicity and semantic compositionality, also serve to specify what constitutes linguistic representation. (The initial presentation of the productivity and systematicity argument occurs in Fodor (1987, Appendix.) For a review of these plus a brief discussion of compositionality, see Fodor (1998a, pp. 94-100.) The following discussion makes largely the same points as Kaye (1995a, pp. 92-7.))
 

I. Productivity 

Humans are able to entertain an indefinitely (perhaps infinitely) large number of semantically distinct thoughts. As a simple demonstration, consider the following ways of producing large, perhaps infinitely large, series of thoughts: 

(1) I'm very worried. 

I'm very, very worried . . . 

(2) Place any two names you can think of in: ______ loves ______ 

(3) Conjoin any two thoughts 

(4) There are 99 bottles of beer on the wall. 

There are 100 bottles of beer on the wall . . . 

The obvious explanation of our ability to entertain indefinitely many thoughts is that thought consists of linguistic elements, viz., concepts and sentences, that can be combined in indefinitely many ways to yield this vast array of content-distinct thoughts. (As I will argue elsewhere, I think that this argument has another very important role: it also shows that concepts rather than propositions are the primary semantic units.)
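The combinatorial point can be made vivid with a toy generative sketch (purely illustrative; the lexicon and schemas below are invented for the example, following (1)-(4) above): a small, fixed stock of elements plus a few combination rules yields an unbounded supply of semantically distinct sentences.

```python
# Toy illustration of productivity: a finite base of elements plus
# combination rules yields unboundedly many distinct "thoughts".
names = ["John", "Mary", "Sue"]

def loves(a, b):
    # Schema (2): place any two names in "___ loves ___".
    return f"{a} loves {b}"

def conjoin(p, q):
    # Schema (3): conjoin any two thoughts.
    return f"{p} and {q}"

def bottles(n):
    # Schema (4): one distinct thought per natural number.
    return f"There are {n} bottles of beer on the wall"

# Finite base, rapidly growing set of distinct complex thoughts:
atomic = [loves(a, b) for a in names for b in names]
complex_ = [conjoin(p, q) for p in atomic for q in atomic]
print(len(atomic))    # 9 atomic thoughts from 3 names
print(len(complex_))  # 81 conjunctions, and conjoining can iterate
print(bottles(99))
```

Since `conjoin` can be applied to its own outputs, and `bottles` takes any natural number, the set of expressible contents is unbounded even though the base vocabulary is tiny.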
 

II. Systematicity 

There is an apparently linguistic systematic relationship amongst the sorts of thoughts that we are able to entertain. Thus, if someone is able to think that "John loves Mary," we expect them to be able to think that "Mary loves John" as well. And if someone is able to think both "cats make good pets" and "birds are good to eat" then they should also be able to think "birds make good pets" and "cats are good to eat."
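On a compositional picture, this systematic pattern falls out automatically: the same constituents and the same combining rule that produce one thought suffice to produce its variants. A minimal sketch (the frames are invented for illustration):

```python
# Illustrative sketch of systematicity: tokening a structured thought
# reuses constituents plus a combining rule, so the resources for
# "John loves Mary" are automatically resources for "Mary loves John".
def predicate(rel, subj, obj):
    return f"{subj} {rel} {obj}"

# If a thinker can combine these constituents one way...
t1 = predicate("loves", "John", "Mary")
# ...the same parts and rule yield the recombined thought:
t2 = predicate("loves", "Mary", "John")

# Likewise for the pets example: subjects swap across frames.
t3 = predicate("make", "cats", "good pets")
t4 = predicate("make", "birds", "good pets")
print(t1, "|", t2)
```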
 

III. Semantic Compositionality 

As I note in Kaye (1995a), the productivity argument can be met with the response that at least one other representational medium, viz., imagery, is also semantically compositional, e.g., an image of a dog and an image of a tree can be combined to form an image of both, and colors and shapes can be combined to depict indefinitely many things and scenes. Likewise, imagery seems to involve some systematic competence: anyone able to image a red circle and a green square should be able to image a red square or a green circle. 

However, this line of reply doesn't carry much weight given that the vast array of productivity and systematicity examples that are available are quite obviously linguistic in nature. This applies to (1-4) above, to the example with the individual constants "John" and "Mary," and to the "cat" and "bird" example; none of these can be readily expressed in images. Generally, we use language to express the contents of thoughts and it works exceedingly well. And commonsense attributions and explanations often make reference to concepts, which are propositional, that is, linguistic, constituents. As Fodor (1998a, pp. 99-100) states, the semantic compositionality of thought is simply ubiquitous. (He terms this the "best argument" for semantic compositionality, and thus for the linguistic nature of thought.) 
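The core idea of semantic compositionality--that the meaning of a complex expression is determined by the meanings of its parts and their mode of combination--can be sketched with a toy extensional semantics (all extensions here are invented for illustration):

```python
# Toy compositional semantics: the meaning (here, the extension) of a
# complex expression is computed from the meanings of its parts.
# All extensions are made up for the example.
lexicon = {
    "cow":   {"bessie", "elsie"},
    "dog":   {"rex", "fido"},
    "brown": {"bessie", "rex", "acorn"},
}

def modify(adjective, noun):
    # Intersective modification: "brown cow" denotes the things
    # that are both brown and cows.
    return lexicon[adjective] & lexicon[noun]

print(modify("brown", "cow"))  # {'bessie'}
```

Note that on this rule the extension of "brown cow" is always a subset of the extension of "cow," which mirrors the automatic semantic inference ("brown cows are cows") discussed below.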

Nonetheless, there has been at least one attempt to avoid accepting the semantic compositionality of thought. (There has been a fair amount of discussion about whether or not connectionist networks exhibit productivity, systematicity and compositionality, but whether they do or not, it seems evident that these are aspects of our thought attributions.) Schiffer (1987, especially Chapters 7-8), while accepting the hypothesis of a Mentalese, has argued that a token reductionist view of thought accounts for its meaning without entailing that there is a compositional semantics for it. (In effect, his view is that while thought is linguistic, language is not semantically compositional. However, since as we've just seen, one of the main arguments for linguistic thought rests on its apparent compositionality, it is worth noting why Schiffer's view is implausible.) As I argue in Kaye (1993a), a token reductionist view fails to explain the contents of thoughts, and is thus inadequate as an account of their semantic nature. To see both Schiffer's view and my criticism, suppose that meaningful messages begin appearing in ant colonies--e.g., suppose that at a given moment, the position of a large group of ants spells out 'STOP MESSING WITH THE ECOSYSTEM'. Schiffer's view amounts to explaining such messages by showing how it is physically possible for the ants to spell out such a message, i.e., how the (easily) physically possible position of each ant's body adds up to the shape that constitutes the message. But, if such bizarre events did occur, the token reductionistic explanation wouldn't satisfy us one bit--it is simply inadequate at explaining the apparent intentionality behind such messages. Thus, the semantic compositionality of thought, and more generally of language, does not seem to be a matter of genuine doubt.
 

IV. Complexity 

As I also mention in Kaye (1995a), many of our thoughts get extremely complex semantically, and language is the only known medium that is able to achieve and thus account for this degree of complexity. As an exercise, choose at random any substantial sentence from Kant's Critique of Pure Reason. (As most readers will know, the book is notorious for containing lots of very substantial sentences.) Do your best to try to find a medium other than language that can express the content of that sentence. You will easily see that images, maps and the like are absolutely hopeless on this score (as are connectionist networks unless they simply instantiate linguistic representations.) So it seems that the only viable hypothesis to account for our ability to entertain thoughts with such complex contents is that thought is linguistic.
 

V. Introspection 

Finally, the simplest, but not thereby an insubstantial, consideration in support of the linguistic nature of thought is introspection. Occurrent thought seems to occur in the languages that we speak. (Carruthers 1996, pp. 49 ff., does a nice job of expounding this point.) Each of us can verify this for ourselves. Of course, introspection can be mistaken; but this is not a type of introspection that involves theorizing, analysis or emotional ties--awareness of occurrent thought thus appears to be free of the things that tend to get in the way in other areas, e.g., when we introspect our reasons for acting as we did. 

The only thing to add is that this evidence only concerns occurrent thought, and it is clear that there is much more to representation than just occurrent thought; indeed, it may only be a small (albeit often important) part of our representational systems. The introspective evidence about the linguistic nature of occurrent thought seems strong, though limited, in relation to the question of whether thought in general is linguistic.
 

VI. Summing Up 

All told, the considerations involving productivity, systematicity, apparent semantic compositionality, complexity and introspection provide us with simple, direct and very powerful support for the hypothesis that thought is linguistic. However, and this is extremely important to note, none of these considerations, either individually or jointly, support the view that all thought is linguistic. Nor do they support the view that thought occurs in a uniform representational medium. Indeed, introspection tells us that we also have seemingly non-linguistic, imagistic representations. (And I know of no other good argument for the uniform medium view, though it often seems to be assumed in discussions of Mentalese.)
 

Two Hypotheses 

If, then, some thought is linguistic, there must be a language or languages that thought occurs in. One hypothesis, i.e., Fodor's, is that there is a language of thought--Mentalese--that is distinct from spoken languages; to have a(n explicit) linguistic thought is to have a token of Mentalese explicitly formulated in the brain. 

However, there is at least one alternative to the Mentalese hypothesis, and that is the view that linguistic thoughts occur in the languages that we speak, so that the medium for my linguistic thoughts (and perhaps yours as well) is English. This account is in agreement with the first two planks of Fodor's platform, viz., representational realism and linguistic thought, but it repudiates distinctness, nativism and semantic completeness. 

The spoken language hypothesis has limited but strong support from two sources. First, introspection tells us that our occurrent thoughts are in the languages that we speak. Now it may be true that we are actually having occurrent thoughts in Mentalese and just representing (or presenting) these thoughts to ourselves as though they were in our spoken languages. However, unless there is some independent argument or evidence to support this claim, it seems that we should take the data of introspection at face value, viz., as supporting the spoken language hypothesis. 

It is also possible that while occurrent thought does occur in spoken languages, all our other linguistic thoughts (the perhaps vast array of non-conscious states including stored memories and non-conscious processes) occur in Mentalese. However, here the second consideration in favor of the spoken language hypothesis comes into play, namely Ockham's razor: unless there is compelling evidence or reason to postulate something otherwise unknown, we ought to favor economy in our ontology. This is especially true given that it appears that our spoken language knowledge itself involves a lot of computational ability and space, and, obviously, the brain's space and capacity have their limits. 

It would seem, then, that, all things being equal, we should prefer the spoken language hypothesis to the Mentalese hypothesis; strong evidence or arguments are needed to show why we should accept the latter over the former.
 

The Standard Argument: Distinctness, Nativism, and Semantic Completeness 

Until very recently, Fodor's main line of defense of the Mentalese hypothesis was what he now terms "the Standard Argument." (The argument is presented in Fodor 1975, pp. 79 ff. and 1981b, pp. 266-9. He does not call it the Standard argument in these passages.) I will provide a brief exposition and then consider various responses. 

The Standard Argument, if correct, shows not only that there is a language of thought distinct from all spoken languages, but also that it is innate (in humans, perhaps in other species too) and that it is semantically expressively complete--it is able to express any concept that humans are able to grasp. 

The argument itself is actually a defense of radical conceptual nativism--the view that most concepts are innate. But this yields an immediate connection with the issue of the medium of linguistic thought once we note that, if language is the main mental medium in humans, then most concepts (certainly all concepts that we can express in spoken language) will be expressible via predicates in the language of thought with the appropriate meanings. 

The heart of the argument, then, is as follows: all available models of concept learning characterize acquisition as a matter of learning to represent the appropriate category through trial and error approximation of a representation of the category. An example will make this clear: suppose I encounter a dog and am seeking to learn a concept that will group it with other like things. (Instead of acquiring the concept DOG I might also manage to acquire ANIMAL or PET or THING THAT BARKS from such encounters, but that's not the issue here.) The standard psychological explanation is that I somehow put together a set of features that I think this thing shares with other like-kinded things (has a tail, furry, barks, etc.) and test to see if such things generally exist in the greater environment. If so, it's a useful concept (perhaps even one of nature's joints--a natural kind) and I've acquired it. Or perhaps I find that while there are many other tailed, furry, barking things about, I have encountered a fairly unique case of, say, "having a bow in its tail," so I delete that from my concept's feature list, and perhaps I also add other features as I investigate, e.g., "normally four-legged." Ideally, at the end of acquisition I've got a set of features that yield the DOG extension--the set of all dogs--and so I've acquired the concept DOG. 

The thing to notice here is that trial and error learning (a staple of associationistic approaches to learning--that's really Fodor's target here) involves something like a representation-of-feature bundles view of concepts, which is to say that acquired concepts are compositions out of representations of features, typically, conjunctions of those representations. But to represent features, one must have concepts of them. So if dogs are normally four-legged, furry, tailed, barking things, then the concept DOG is the composition or conjunction or association of the concepts NORMALLY FOUR-LEGGED, FURRY, TAILED, BARKS. So these concepts must already be present in order to learn DOG. So there must be some unlearned, i.e., innate set of concepts that serve as the composition set for all concepts we can possibly acquire. But this is expressive semantic completeness (Fodor does not use that phrase)--in effect, it just falls out of the standard associationistic view of concept acquisition. (Fodor 1998a now calls this the definitional view of concepts, together with associationism.) 

As Fodor (1981b) puts it, everyone's a concept nativist to some extent; the difference between so-called empiricism and nativism is just the difference between the number of concepts one thinks are complex associations versus primitives. But that's where the second point of the argument comes in: Put simply, associationism has been a colossal failure. Each wave of programmatic attempts at specifying the reductions of ordinary concepts into more primitive concepts has failed, be it in philosophy, psychology or linguistics. DOG does not reduce to any such features, nor, it seems, can we find very many concepts that so reduce--they are few and far between (e.g., PRIME NUMBER, DOE.) Rather, it seems that most ordinary concepts are primitive, which, given the concept acquisition model presented above, is to say that they are innate. 

But, and here's the important thing to see, whether or not conceptual reductions for most ordinary concepts can be found, it is the first point alone that supports the Mentalese hypothesis. Once it is accepted that in order to acquire a concept, one must be able to represent the reducing concepts in a hypothesis, and given that much thought is linguistic, and specifically, that most or all concepts can be expressed in linguistic thought, it follows that a language with predicates whose meanings are the ultimate reducing concepts is required in order to acquire other concepts. (If DOG really did reduce to NORMALLY FOUR-LEGGED, FURRY, TAILED, BARKS, and those in turn are primitive, we'd need a language with predicates expressing those concepts.) That is to say, there must be an innate language of thought with predicates that express the primitive concepts. Since spoken languages are not innate, it follows that each language learner must have a LOT distinct from all spoken languages. Economy suggests the hypothesis that the species has a single LOT, that is used for all concept learning, including and especially spoken language acquisition. To paraphrase Fodor (1975), "you need a language in order to learn a language." That language is Mentalese.
 

Responses 

There have been two broad types of responses to the Standard Argument (I will not consider attempted defenses of associationism that, if successful, would counter nativism without affecting the LOT hypothesis): those that suggest that concept acquisition does not involve representation of (the features of) instances of the to-be-acquired category, and those that, while allowing that acquisition does involve such representation, maintain that concepts can be "built" in some other way than association. The first category divides into those suggesting a causal view and those suggesting a use theory of meaning.
 

I.a Causation 

Both Samet (1986) and Sterelny (1989) have independently sought to counter Fodor's radical concept nativism (and thus, indirectly, the LOT hypothesis) by sketching views of how concepts can be acquired merely through causal contact with instances of the concept. Samet offers the metaphor of infection (though he does not elaborate it in much detail) and Sterelny argues that an externalist theory of meaning allows that mere causal connections with instances (plus some sort of "bootstrapping"--another metaphor that he likewise does not develop) are sufficient for acquisition in many cases. If correct, a causal view (either one) would not only be an alternative to radical concept nativism, but would undermine the first part of the Standard Argument, which is the part that provides the main support for the LOT hypothesis. 

However, it's easy to see that without (a lot of) further elaboration, such views are simply false. Bumping into a kangaroo is not sufficient for acquiring KANGAROO, and it also is possible to see one without acquiring the concept. And all living things are constantly in causal contact with electrons and quarks and the like, yet only a few living things (some humans) acquire concepts of those things. What these and countless similar cases show (see Kaye 1993b, pp. 198 ff.) is that it is not sufficient for a concept learner to just be in contact with instances; she must represent them, and not just represent them, but represent appropriate properties of the instances. But that's just where the point about hypothesis testing comes in in the Standard Argument. Even if (as Sterelny thinks, and this is, of course, controversial) causal connection is a main part of meaning, the causal connection portion of meaning does not account for concept acquisition.
 

I.b Meaning is Use 

It has been suggested that a use theory (or conception--the above phrase has become a sort of undeveloped motto in certain Neo-Wittgensteinian circles) of meaning can meet Fodor's Standard Argument. Specifically, Carruthers (1996, pp. 67 ff.) suggests that knowledge of language is not "propositional knowledge that" but rather it is "knowledge how" or a "skill." That is, concept acquisition does not involve explicit representation as the Standard Argument claims. (Block 1986, pp. 647-8 applies a conceptual role theory to Fodor's argument, but I argue in Kaye 1993b, pp. 200-4, that his response doesn't succeed.) 

However, appeal to a use theory here is just hand-waving unless it is accompanied by an account of what precisely is learned (and how), for the obvious ways of explaining how we learn to use words take us quickly back to just the sort of explicit representations that the Standard Argument turns on. E.g., consider the problem of explaining how we learn to use the word 'dog'. Presumably, one must learn that 'dog' applies to dogs. How can one do this? The obvious way (in a roughly cognitive psychological explanatory framework) would be to have a representation of the property of being a dog; but this is just to say that one already has the concept DOG. Or, perhaps it might be suggested that one learns the skill of applying 'dog' to tailed, furry, barking things. But how can this be accomplished, if not by having the concepts TAILED, FURRY and BARKING? And, moreover, this type of answer would appear to imply that DOG reduces to a conjunction of some set of feature concepts, which it notoriously doesn't, nor does it appear to semantically entail such features. This latter problem applies equally to the suggestion that we learn to infer 'dog present' from 'tailed, furry, barking thing present'. This is not an automatic, apparently semantic inference, in the way that "brown cows are cows" is. Rather, the obvious (cognitivist) explanation is that we draw such inferences because we have knowledge--i.e., representations--of dogs that tells us that these are typical but not necessary features of dogs. 

Merely saying that meaning is use, or a skill, etc. does not provide any sort of response to the Standard Argument unless "use" gets cashed out in a way that clearly leads to plausible explanations of concept acquisition that do not involve explicit representations. (Carruthers 1996 also develops a more elaborate response to other arguments which he thinks the LOT view rests on. See Kaye 1998 for a critical evaluation of this response.)
 

II.a Acquisition Without Reduction 

Here is one potentially workable strategy for countering the Standard Argument: It is tempting to think of a concept as a concrete syntactic mental state, such as a predicate (in an inner language.) However, when we consider the various roles that concepts are supposed to play--they are meanings, involve categorization abilities, and provide knowledge of kinds--then it is more likely that a concept is an abstract type of psychological state that consists of various more concrete representations and processes. (I take it that "concreteness" is at least vaguely determinable in terms of syntactic concreteness or by causal interaction.) Specifically, it may be that possession of the concept DOG involves: 1) perceptual abilities that allow the recognition of typical dogs, i.e., knowledge of typical dog appearances and/or knowledge of typical dog features, 2) knowledge of general ontological categories that dogs fall under (e.g., OBJECT, LIVING THING, MAMMAL or ANIMAL) and 3) knowledge of essential features of dogs, e.g., that they are interbreedable only with other dogs, or that they all have the same type of genes. (This example is very loosely based on some speculative accounts of concepts in cognitive psychology from a few years back--see the Concepts entry for an introduction to the literature.) Notice that these various sorts of knowledge and abilities need not be cognitively localized, but are more likely distributed amongst various faculties such as the perceptual systems, abstract concept system and general knowledge systems (e.g., see the model of cognitive architecture developed in Jackendoff 1987); there is no reason to think that a concept consists of an isolated container for all this information. On the other hand, it is likely that such representations are all fairly closely linked. But, either way, the suggestion is that possessing 1-3 allows us to represent the category of doghood. 

The abstract psychological state view of concepts just offered, or any close variation of it, allows the following reply to the Standard Argument: trial and error-based concept acquisition is not a matter of associating or conjoining already present concepts, rather it is a matter of acquiring various different types of sub-conceptual knowledge (1-3 above) and connecting up this knowledge in the right sort of way, viz., to coherently represent a kind. But there is no reason to think that the representations that get hooked up must be in a uniform medium, such as a language of thought; some may be linguistic representations, others may not be. Thus, there is no reason to think of concept acquisition as explicit hypothesis testing (in a language of thought.) So, this account of concepts does not entail the semantic expressive completeness thesis, for, to put it simply, the Standard Argument's support of semantic completeness rests on the fact that concept learning is nothing more than concept association, but the present account says that concept learning is more complicated than that, involving potentially more diverse elements and potentially more diverse means of connection. 

If the abstract type view is correct, possessing a concept will require possessing various sub-conceptual representational states. It follows that, while grasping or possessing a concept psychologically reduces to possessing those sub-conceptual psychological states, it does not follow that the concept DOG semantically reduces to some set of other concepts. The perceptual aspect (1) involves reference fixing without specifying the full extension--these are sufficient but not necessary conditions; the ontological categories (2) are necessary but not sufficient conditions; and the essential feature(s) (3) are necessary and sufficient conditions, but they are specified relative to the typical but not necessary features. While this may yield a definition of sorts, it is not apparent that this will qualify as a traditional, reductive definition, e.g., "dog"="the type of mammal (etc.) that is interbreedable with things that typically are four-legged, furry, tailed barkers of such-and-such typical shapes and sizes." On the one hand, it might be maintained that the need for phrases such as "the type of" and "things that typically are" makes this a non-reductive definition, in which case this shows how a concept can be semantically primitive and yet acquired, i.e., because it is psychologically complex. Or, on the other hand, this might count as a reductive definition, in which case the present view might be viewed as a means of developing more complex (and hopefully more successful) definitions than those of traditional associationism. But, either way, the abstract state view blocks the crucial first part of the Standard Argument. 

Now the just-presented view does suggest that a lot will be innate. The (rough) sketch of the concept DOG cites a good number of representations: perceptual features, ontological features and essential features. So prior possession of all of these sorts of representations will be required in order to acquire the concept. This is not to say that the concepts of all of these types must be present, for recall that, on this view, concepts are very abstract state types (e.g., it is easy to see how one could be able to detect triangular shapes without possessing the (full) concept TRIANGLE). But unless there is a story to be told about how such sub-conceptual representations are acquired, the default hypothesis must be that they are innate. 

I take such nativism to be an acceptable, albeit controversial, consequence of a view of concepts, roughly along the lines of Chomsky's hypothesis of very substantial innate resources for language acquisition. However, the above view of concepts also implies that there will be a lot of conceptual truths, e.g., "typical dogs have tails and fur," "dogs are living things," "dogs only interbreed with things that typically have tails and fur." Perhaps Quinean considerations can be adduced to show that few if any of these are genuine semantic truths. Or perhaps the analyticity of such statements can be defended. (Part of the defense might involve the point that, on a naturalized view of semantics, we should not expect complete semantic knowledge to be accessible from the armchair, e.g., through intuitions--see Kaye 1995b. The above account of the psychological conditions of concepts nicely illustrates that sort of psychologically naturalistic approach to semantics.) 

But if the above view does fail on the conceptual truths score, then there is another possibility--it may be that concepts are indeed abstract state types that supervene on but do not reduce to sub-conceptual representational states (see Kaye 1993b, pp. 206-12 for elaboration). Specifically, it may be that while each of us has some set of typical features for reference-fixing typical dogs, we do not generally share exactly the same feature set. (Or, though I am highly skeptical of this possibility, it may be that concepts somehow emerge out of connectionist networks; we await a plausible account of the "somehow" part.) But, whatever the reasons for the non-reduction, mere supervenience does not imply the existence of such conceptual truths, thus preserving the virtue of the abstract state view--viz., its ability to answer the Standard Argument. 

So I suggest that conceiving of concepts as abstract psychological types allows for the development of a view of concepts (and much of the development must ultimately come from cognitive psychology) that blocks the Standard Argument, undermining Fodor's main support for the Mentalese hypothesis and leaving the spoken language view of thought as the more plausible alternative.
 

II.b The New Fodor 

In recent writings, Fodor has himself developed a view of concepts that blocks the Standard Argument. (He still thinks the Mentalese hypothesis is correct, but that its support rests on other arguments (personal communication).) According to Fodor (1998a, Chapter 6) most concept acquisition (for non-natural kind and non-logico-mathematical concepts) involves getting a stereotype from experiencing a typical instance of the kind. The stereotype then triggers or produces the concept. Three additional points are key here: 1) on Fodor's view of meaning, for p to represent the property P is for p to be in law-like correspondence with P. Fodor calls this "locking"--to have the concept DOG is to be locked to, i.e., to be in law-like correspondence with, the property of doghood. This is not a matter of knowledge, but a matter of disposition, e.g., the ability to label a dog a 'dog'. 2) The metaphysical connection between the instances, locking (i.e., concepts) and properties (i.e., why do dogs trigger the dog concept and not the cat concept?) is explained through the claim that the kind properties in question are nothing more than the property of being what cognizers (e.g., humans) lock to when they have gotten a stereotype of the typical properties of the kind via experiencing (typical) instances of the kind. 3) This, in turn, allows Fodor to claim that the way that the stereotype produces the locking is non-evidential (or at least, need not be evidential); that is, the stereotype does not serve as evidence for determining the extension of the concept. 

This final claim allows Fodor to thwart the Standard Argument; if the relationship between experience and acquisition is not a matter of hypothesis formation and testing, there is no reason to think that the concept, i.e., the locking, has to reduce to some set of more basic concepts, least of all the concepts of the properties in the stereotype. The view thus allows that acquired concepts may be primitive, so this is compatible with the apparent non-reducibility of most concepts. Once again, this undermines the support for semantic completeness, nativism and distinctness, which is to say that it undermines the main support for the Mentalese hypothesis. 

Incidentally, Fodor's new conception of concepts, like the view I suggested above, makes a lot of nativist commitments, but it is not guilty of methodologically supporting the radical concept nativism that the Standard Argument implies. If the specialized ability to lock to a given extension is dormant in a cognizer and is activated by a(n appropriate) stereotype, then this looks like concept nativism. But if there is a general mechanism for producing lockings given more or less arbitrary stereotypes as input, then this looks like a sort of learning. However, either way, the representations of the individual features (Fodor thinks of them as sensory qualities) that make up the stereotypes must also be innate--but this is not exactly radical nativism either (unless Hume counts as a radical nativist!) 

Now, I have an objection to Fodor's new account of concepts: it seems possible that two distinct concepts could share the same stereotype, e.g., the typical features of doorknobs could also be the typical features of doorprojections, since typical doorprojections are doorknobs, but the two are not co-extensive, since hooks are doorprojections but they are not doorknobs. And suppose that experiencing typical doorknobs sometimes produces the concept DOORPROJECTION. This would mean that the property of being a doorknob could not be identified with the property that gets locked to when experiencing the typical properties of a doorknob. While this may just be a technical problem, the obvious solution--viz., that in the case of doorknobs but not doorprojections we lock to something that is used to open doors--makes the concept partly a matter of knowledge and may also invite an evidential explanation of why the stereotype sometimes produces one concept rather than the other (e.g., different sorts of abstraction from the stereotype).
 

Beyond the Standard Argument 

If either Fodor's new view of concepts or my abstract state (or supervenience) account is correct, then the Standard Argument is undermined. Whether or not one finds either of these views of concepts plausible, note that it is likely that the Standard Argument must somehow be mistaken, for it implies that most of our concepts are innate, and that is manifestly false (see Kaye 1993b, pp. 188-9 for elaboration.) 

Let us then assume that the Standard Argument is indeed mistaken--where does that leave us in regard to the Mentalese hypothesis? As far as I can see, the Standard Argument is the only basis for semantic completeness; however, there are a number of other arguments that have been offered to support the existence of Mentalese by supporting either distinctness or nativism.
 

I. Infants and Animals 

Fodor (1975, p. 56) argues that the fact that very young children and animals (of various ages) think, but do not know any spoken languages, shows that thought does not occur in spoken languages. But here we must keep in mind that the support for human thought being linguistic does not establish the strong thesis that all human thought is linguistic (let alone all thought, per se) but only that some or much of it is linguistic. So it is possible to have non-linguistic thought, and adult humans appear to have some of it, along with many linguistic thoughts as well. So the spoken language theorist may simply argue that infants and animals have only non-linguistic thought. This is not hard to defend, since the strongest considerations in favor of linguistic thought, viz., complexity and general semantic compositionality, are questionable or dubious for infants and animals. (See Kaye 1995a, pp. 102-4 for elaboration.)
 

II. Belief Typing 

Fodor (1981a, 191 ff.) argues that the natural way for the spoken language view to individuate beliefs is by individuating the sentences used to ascribe or express the beliefs, but this produces an incorrect taxonomy. For instance, it distinguishes the belief that John loves Mary from the belief that Mary is loved by John, but this is pretty clearly the same belief. 

This objection can be easily met, though (see Kaye 1995a, pp. 100 ff.), by contesting the suggested method of belief typing--there's no reason why the spoken language theorist has to get stuck typing by syntax, and mere surface syntax at that. The obvious method that would appear to get the right taxonomy is to instead type by the meanings of belief sentences. If 'John loves Mary' and 'Mary is loved by John' are synonymous, then the above two ascriptions describe the same belief. (It may also be desirable to type by deeper linguistic structure, e.g., LF--see below.) 

The same solution also applies to shared beliefs of speakers of different languages. While the German 'Es regnet' and the English 'It's raining' are obviously different sentences, they express the same meaning and thus, assuming and applying the spoken language view, typing beliefs by meaning will correctly type a German speaker's belief relation to an inner token of 'Es regnet' as the same type of psychological state as an English speaker's belief relation to an inner token of 'It's raining.' (Again, it may be desirable to type by deeper structure, probably synonymous LFs for intra-linguistic cases.)
 

III. Ambiguity 

There are various versions of the ambiguity argument around (cf. Pinker, 1994, pp. 78-9, but see Cole (1998) for a response to this and other pro-Mentalese arguments from Pinker) but the strongest is from Fodor, who maintains that linguistic representations must be explicit about their logical forms, i.e., they must not be ambiguous. This is because, he argues, cognitive processes are causal, and mental representations have their causal powers in virtue of their syntactic structure. Thus, syntax must mirror semantics for cognitive processes. But notoriously, syntax does not mirror semantics in spoken languages, which contain many ambiguous sentences (e.g., "I had the book stolen"). 

While several of the claims in this argument may be contentious, the spoken language theorist can nonetheless accept the conclusion, but propose that we do not think in representations of the surface structure of spoken language sentences, but rather we think in "deeper" structures of the sort that have long been proposed by Chomsky and other linguists, e.g., the LFs of spoken language sentences. Such forms are non-ambiguous, thus satisfying the demand of the above argument. 

Now, it may seem that this supports distinctness after all, for such structures are not what we utter or write in our spoken languages. But on the other hand, this is not the sort of distinctness of Fodor's Mentalese hypothesis, for 1) such structures will not be universal but will vary with the spoken language and 2) the deeper structure will contain many, most or all of the words in the relevant spoken language, albeit presented in a different structural arrangement than the strings of speech and writing. 

Perhaps this is a sort of compromise hypothesis; but it is much closer to the (simple) spoken language hypothesis than it is to the Mentalese hypothesis; I think it is reasonable to regard it as a variant of the spoken language view.
 

IV. Language Learning 

Is there some other way besides the Standard Argument of defending Fodor's claim that "you need a language to learn a language?" I know of no actual arguments to this effect, but I have heard it suggested that contemporary computational linguistics somehow supports the idea of Mentalese, so I will offer a sort of preemptive strike against such considerations. 

Really all that is needed here is a pointer to Chomsky's views (1965, Chapter 1; 1975, Chapter 1; 1986); his position all along has not been that you need a language to learn a language, but rather that you need a language acquisition device to learn a language. This was originally described as "universal grammar," which might make it seem as though one must have a language to represent the grammar in, but more recent accounts make it clear that this is not so. The updated hypothesis about what must be innate in order to learn spoken languages is that humans have a parameter setting device that sets the logical and syntactic forms of a given language based on sample sentences that the child encounters (Chomsky 1986, p. 146). For example, the device decides between S-V-O and S-O-V structure. While such a "device" may amount to a very elaborate bit of innate cognitive machinery, there is no reason to think that it must embody knowledge of a language; rather, its operation produces knowledge of languages. So it seems that language acquisition can be explained without postulating Mentalese.
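The logical point here can be illustrated with a deliberately crude toy sketch (in Python). Everything in it--the function name, the role-tagged input format, the restriction to a single two-way word-order parameter--is an artificial convenience of mine, not a claim about Chomsky's actual proposal or about what children perceive. The point it illustrates is just this: the device itself contains no knowledge of any particular language; it merely fixes a parameter from sample data, thereby producing such knowledge.

```python
def set_word_order(samples):
    """Toy parameter-setting device: given role-tagged sample
    'sentences', fix the word-order parameter as S-V-O or S-O-V.
    The device embodies no language--only a procedure for choosing
    between the two parameter values."""
    votes = {"SVO": 0, "SOV": 0}
    for roles in samples:
        # e.g. ("Subj", "Verb", "Obj") -> "SVO"
        order = "".join(r[0] for r in roles)
        if order in votes:
            votes[order] += 1
    # settle on whichever order the samples support
    return max(votes, key=votes.get)

english_like = [("Subj", "Verb", "Obj"), ("Subj", "Verb", "Obj")]
japanese_like = [("Subj", "Obj", "Verb"), ("Subj", "Obj", "Verb")]

print(set_word_order(english_like))   # SVO
print(set_word_order(japanese_like))  # SOV
```

The output of the device is a grammar-fragment (a parameter value) for the sampled language; nothing in the device's own machinery is a sentence or word of any language, which is the anti-Mentalese moral of the paragraph above.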
 

Conclusion 

While there are good reasons for believing that much human thought occurs in languages, there is no basis for hypothesizing a Mentalese that is distinct from all spoken languages as the medium of thought. Rather, the best hypothesis is that linguistic thoughts consist of various structures (surface structure in occurrent thought, and a deeper "logical form" for other processes) of our spoken languages. 
 

Appendix: A More Definite Formulation of a Natural Language Thought Theory  

In response to some critical remarks by Fodor (1998b, pp. 64-5) about Carruthers' view that we think in natural languages, and inspired partly, too, by Cole's (MS 1997) speculations about occurrent thought (and aided by some very helpful personal communications with both of them), here is a clarified version of some of the sketchy points above about thinking in natural languages.

I propose the following theory of linguistic thought:

1) All high-level linguistic thought occurs in some structure of spoken, natural languages.

Remarks: Non-linguistic thought is imagery. Low-level thought, if there is such a thing, is, as I conceive it, strongly analogous to (actual) machine languages, viz., without any high-level content. If we do have a machine language of thought and if it has content at all, it will exclusively concern "machine-like" operations, e.g., "move x to location n". (Or, e.g., commands similar to those that programmers employ when modeling connectionist networks in, e.g., LISP.) How much of our thought is high-level and linguistic is an open empirical matter, but the arguments for linguistic thought--see the tour--suggest that a fair amount of thought is both high-level and linguistic.

2) Conscious, "occurrent" thought occurs in the PF (the technical specification of phonetic forms from linguistic theory) of natural language sentences. Such thoughts receive meanings via assignment of or translation into an LF (the technical specification of logical form from linguistic theory). However, we are not aware of the LF of occurrent thoughts--we do not have a conscious representation of it; we are only aware of the PF structure, qua acoustic image.

Remarks: We are aware that our thoughts have meaning--in effect, aware that they have been assigned an LF (or, better, translated into LF)--but the LF structure is not in any sense directly consciously introspectible. We may be able to make guesses about LF based on inferential role, etc., and this is, indeed, what linguists do when they gather intuitions about various sentences' meanings. (One gathers one's own intuitions just by thinking the sentence--e.g., try to get the three meanings of "I had the book stolen" (Chomsky 1965). What you're no doubt doing right now is thinking the sentence over and over again, trying to come up with three separate meaning assignments--trying to think it meaningfully in three different ways. (Answers: someone stole it from me; I had someone steal it for me; I took the book but then thought better of it and returned it.) Note that while it's reasonable to say that you can be conscious of its having three different meanings, you are not explicitly aware of the meanings--on the present view, this is to say that you are not explicitly, consciously aware of the (presumably) three separate LFs that the sentence might be assigned.)

Note how this account reflects the outlook of Chomsky (1986); thought is not a matter of entertaining sentences in the E-language but rather of entertaining structures in the I-language, which Chomsky (1995) now thinks consist solely of PF and LF structures.

Dennett once suggested that occurrent thought is an inner input into the language comprehension system, and the present view is certainly consistent with that idea. That is, language comprehension appears to be a matter of visually or auditorily receiving a PF representation and then parsing it into an LF; the suggestion is that occurrent thought may be a matter of the self-generation of inner PFs, which are fed into the same parsing system. (High-level) thought might thus amount to creating a series of PF-LFs.

3) Other high-level thought (that is not consciously introspectible) will be either the LFs or the PFs of natural language sentences.

The type of linguistic structure should depend on the nature of the process. The most prominent candidate area for non-conscious high-level thought is reasoning, and here LF seems obviously best suited. It is also worth noting that memory storage may involve both PFs (quotations) as well as LFs (remembering the content of what was said) and, of course, images (so-called episodic memory).

This theory, then, addresses Fodor's (1998b) complaint that appealing to introspection to support a "deeper structure" natural language of thought view is inconsistent with the fact that we aren't aware of LFs and don't have precise or articulate knowledge of them (p. 65). He also makes some smug remarks about occurrent thought having a primarily cheerleading function (p. 64), but most people tend to find that occurrent thought includes a fair amount of judgment and reasoning, and that's pretty darned important. (However, it's perfectly fine if it turns out that most important high-level thought is unconscious--unconscious LFs and/or PFs on the present view.)
 