Once there are minds, it is relatively easy for one to designate an object "X" to stand for another object X. We do this when our children are born or when we name newly discovered things. We let "Colleen" stand for Colleen, "hydrogen" stand for hydrogen, and so on for other things we wish to name, refer to from time to time, and keep track of in conversation or documentation. The public symbols we create or concoct derive their meaning/reference from our thoughts. When thinking about my newly born daughter (Colleen), my wife and I decided to name her "Colleen." But how do thoughts get to be about objects? How do mental states come to contain symbols for things (and thereby mean or refer to things outside the head)? How do thoughts have meaning for concocted symbols to derive? How did my thoughts get to be about my newly born daughter (or about anything else, for that matter) prior to a word of a natural language being chosen or employed to stand for her? It is true that I could think about her as "my daughter" before we named her. But on the view we are considering, that is because words such as "my daughter" express thoughts about my daughter. The question turns, therefore, to how things in the mind come to be symbols that mean things and that can be expressed in a public language. How could thoughts about my daughter come to have a meaning that we express in English as thoughts about "my daughter?" The question about the origin of meaning remains.
This sort of question lies in the background of Fodor’s theory of meaning. If there are some symbols (words of English, say) that derive their meaning from other symbols (thoughts, say), then how do thoughts have their meaning? As we consider Fodor’s theory of meaning, we should bear in mind that he is trying to give a theory of what we may call underived meaning—meaning that does not itself depend on meaning to arise. I’m not sure, but it may help to think about what it would take to bring into existence the first natural—as opposed to supernatural—thought. If correct, Fodor’s theory would provide an answer. He is not trying to provide the answer because he is leaving open that there may be more than one way to create a thought (or something) with meaning.
I say that I’m not sure this helps because Fodor’s theory of meaning does not explicitly require that the first natural symbol to have a meaning must co-occur with the first natural mind. It is fairly clear that he thinks that the meanings of thoughts come first and the meanings of symbols that we use to communicate or express thoughts come second--being derived from the meanings of the thoughts (Fodor, 1987). There is, as it were, mentalese in which we think and with which we learn English, French, German, and the other natural languages in which we communicate. But Fodor (Fodor, 1990c) does not restrict meaning to things with minds or the products of minds (at least there is no clause or background assumption of his theory saying that where there is a meaning there is a mind). I must confess that when I read Fodor (and others who wish to naturalize meaning), I always make the connection between minds and meaning. Indeed, it is sometimes tempting (to me) to make possession of underived meaning a mark of the mental. That is why I suggest that it may be helpful to think about the project of coming up with a set of conditions for meaning in conjunction with the project of building a mind (or at least a thought). However, extreme caution is appropriate here…so be forewarned.
Another matter to bear in mind is that Fodor is interested in what we may call a naturalized account of meaning. That is, he wants an account upon which purely natural (and, perhaps, purely physical) objects are such that one of them can mean or refer to another. On a naturalized theory, meaning must arise out of non-meaningful bits. Those non-meaningful bits must be part of the furniture of the world of natural causes and objects. On such a view, when I am thinking about Colleen or about hydrogen, bits of electrochemical activity in my head are related to the world in such a way that the bits in my head mean or are about these things in the world. If Fodor's naturalized account of meaning is correct, then a complete description of the physical properties of the events in my head, of Colleen and of hydrogen, and of the relation between me and them would suffice to account for how my thoughts could be about these things. The relata and the relation that constitutes (or at least suffices for) meaning would be purely natural (and, perhaps, physical).
So if Fodor is right, meaning is not derived from other things with meaning (the buck stops somewhere), and it is composed of purely natural (non-meaningful) ingredients (objects and relations). He is offering a set of sufficient conditions for meaning. Fodor is suggesting that if something satisfies his conditions, then it will have a meaning. He is not saying that everything that is meaningful must satisfy his conditions. Nor is he saying that his account captures all there is to having a meaning. He is saying that if his theory is correct and his conditions are satisfied, then one natural object in the world "X" will be about, mean, or refer to another natural object X. (Of course, I am simplifying here. For Fodor also thinks his theory can account for how symbols can take as their content uninstantiated kinds, like unicorns. More of this below.)
As we approach a list of sufficient conditions for one thing "X" to mean another thing X, we should address the matter of the physical objects that can be symbols ("X"). Are there any limitations on what can count as a symbol? Must a symbol be an object in the language of thought (LOT)? One might think so, but—recalling my warning above—Fodor does not make this a requirement. One might think that symbols would have to be embodied in a system—not because they get their meanings from their interrelations with one another (that would be to endorse a kind of holism, something that gives Fodor nausea), but because one might expect some system to the organization of what will stand for what, or to how symbols are built out of parts, or because the mind needs to keep track of symbols, their differences, their causal relations to one another, and so on. I think that at one time Fodor did restrict symbols in this way (Fodor, 1987, 1990a), but his revised theory of content (Fodor, 1990c) places no such restriction upon them. What he does say about candidates for symbols is that they are physical objects. Sometimes he talks about their "formal" or "syntactic" properties. However, this is mainly a way to refer to the physical object that is a candidate for being a symbol. We need a way to identify a symbol independently of its content—independently of its being a symbol. Fodor picks the method of mentioning (and quoting) an object as a way of identifying it independently of its meaning. There are no stated limitations on what kinds of physical objects may be a symbol. Letters appropriately arranged on a page are obviously candidates. "Dog" is clearly a symbol. "ODG" clearly is not (though in an appropriate context it may become one). Neural firings seem to be perfect candidates for the instantiation of mentalese symbols with underived meaning. Electromagnetic switches are able to instantiate symbols in a computer (though these may have only derived meaning—at least so far). And so on.
While there are no limitations placed upon the kinds of physical objects that may become symbols, there are hurdles that must be cleared for something to become one. Before we state the conditions of Fodor's theory, note an important requirement that any theory of symbols should capture: a symbol must be capable of being truly or falsely tokened. This requirement for something's being a symbol clearly differentiates natural signs from symbols. Natural signs, if they signify or indicate anything at all, seem to indicate truthfully. If smoke is a natural sign of fire, it truly indicates fire (when smoke occurs, there is fire somewhere nearby). The only way to get a false indication would be to break the law-like connection between smoke and fire. But it is the law-like connection itself that makes the one a sign (indicator) of the other. To break the lawful connection would be to eliminate the one's being a natural sign of the other (in a laboratory, say, where smoke can be artificially produced). Therefore, symbols, whatever else they are, are more than natural signs. For symbols can be false, unlike natural signs. We can say false things and think false thoughts. The meaning of a symbol remains fixed, whether truly or falsely tokened. The "dogs" in "dogs bark" and in "dogs fly" has the same meaning.
The transition from natural signs to symbols has also been called the jump from "information" to "meaning" (Dretske) or from "natural meaning" to "non-natural meaning" (Grice). Whatever one calls the difference, only symbols that have a fixed content are capable of being falsely tokened. Also, the relation between a tokened symbol and its meaning, after a meaning relation is established, need not be one of natural sign to thing signified. Smoke is a natural sign of fire, but "smoke" is not a natural sign of fire (nor of smoke). I assure you that there is neither smoke nor fire in my immediate vicinity. So symbols are definitely not mere natural signs of their referents or meanings. Also note that my use of "smoke" above is not a false tokening. Following Fodor, we shall call it "robust" tokening (symbols can be tokened by things other than their referents/meanings without being false).
Another virtue of a successful theory of meaning is that it should solve the "disjunction problem"--a term coined by Fodor (Fodor, 1987, 1990c) for problems he detected in the "Wisconsin Semantics" of Dretske and Stampe. To see this problem, we can turn our attention to the "causal" part of the "asymmetrical causal dependency theory of meaning." A causal theory of meaning traces the meaning of a candidate for a symbol ("X", say) to the things that reliably (lawfully) can cause "X"s to be tokened (Xs, say). Suppose, for example, that one was building an X-detector (I know that I just said this would be only a natural sign, not a symbol, and that it would have derived meaning, not underived, but there is a lesson that can be learned in this nonetheless…bear with me). To build such a thing, one may exploit a lawful correlation between Xs and "X"s. Suppose we succeed, such that when we point our detector mechanism at Xs, it registers an "X" on its screen. Now we would have a detector of Xs, at best. To get false tokening, it may be suggested that we should search for Ys that the detector mechanism will "mistake" for Xs. Perhaps we would look for Ys with properties similar to, but different from, those of Xs—properties of Ys that the detection mechanism in our detector would not discriminate from properties of Xs. Suppose we find such Ys that trigger "X"s on our detector mechanism. Would we now have turned the trick, made the jump from a natural sign to a symbol? Would the Y-caused "X"s on our detector mechanism constitute false tokens? Fodor famously (and correctly, though he did not discuss this in terms of signs vs. symbols) rubbed our collective noses in the fact that such a detector's "X"s would not be symbols for Xs, but still would be merely natural signs. "X"s would be natural signs of Xs or Ys, rather than falsely tokened "X"s in the presence of Ys.
Since either an X or a Y is sufficient to reliably produce an "X" in our detector, the best interpretation of what an "X" token could mean or indicate would be X or Y. Now, such a tokening would not literally mean this, of course. A token of "X" would be only a natural sign, but it would be a natural sign of an X or Y, not of an X alone. Therein lies the disjunction problem. How is it possible to make the jump from signs to symbols? It involves showing how nature can produce a symbol, not merely a sign, that can have univocal, not disjunctive, content, and how that symbol's content can be falsely tokened. The "disjunction problem" made clear that all of these must be turned in one trick (in one jump). Fodor set himself the task of searching for a causal theory of meaning that would be able to solve the disjunction problem—and make the jump to meaning.
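The detector scenario can be put in toy computational terms. The sketch below is only an illustration (the detector, the shared property P, and all names are invented here, not drawn from Fodor): a device that fires on a property common to Xs and lookalike Ys can do no better than indicate the disjunction X or Y.

```python
# Toy model of the disjunction problem (illustrative names only).
# The detector fires an "X" token on any stimulus presenting
# property P, a property that Xs and lookalike Ys both share.

def detector(stimulus):
    """Fire an "X" token whenever the stimulus has property P."""
    return "X" if stimulus["has_P"] else None

x = {"kind": "X", "has_P": True}
y = {"kind": "Y", "has_P": True}   # a lookalike: shares P with Xs

# Both kinds reliably trigger "X" tokens, so the best reading of an
# "X" token is the disjunction "X or Y" -- not a false token of X.
causes_of_X_tokens = {s["kind"] for s in (x, y) if detector(s) == "X"}
```

Since the lawful route to "X" tokens runs through the shared property P, nothing in the device privileges Xs over Ys; that is the disjunction problem in miniature.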
II. The Asymmetrical Causal Dependency Theory
Fodor’s considered (and revised) theory has gone through several permutations over the years. In this section, I will begin by stating what I take to be his final, considered version of the theory. (This version is not listed anywhere by Fodor in this form, but is culled from Fodor, 1987, 1990c, and 1994.) Later in this section, I will recount another incarnation of the theory (that adds one more condition) and discuss why he probably considered that version and why he probably discarded it later (though it is often very difficult to reconstruct exact Fodor history, even by consulting Fodor personally).
The conditions of the theory are these: "X" means X if:
(1) "Xs cause 'X's" is a law;
(2) for all Ys not identical to Xs, if Ys qua Ys actually cause "X"s, then the Ys' causing "X"s is asymmetrically dependent on the Xs' causing "X"s;
(3) some "X"s are not caused by Xs;
(4) the dependencies in (2) are synchronic, not diachronic.
Condition (1) establishes the nomological type of connection that is needed for something to be a symbol, but it clearly is not enough to make the jump from a natural sign to a symbol (as we've discussed above). Naturally, condition (1) must work against a certain background. There must be a story to be told about how and under what conditions objects cause tokenings of symbols. In systems like us, there will be a psychophysical story to be told—Fodor is quite explicit about this (Fodor, 1987). Also, the story is more complex for theoretical terms than for observation terms, but I will not go into all the possible permutations here (Fodor, 1987; Fodor, 1998, Concepts, Oxford University Press). Still, if something is going to be a symbol for X, then there must (according to this theory) be some counterfactual-supporting connection between the symbol "X" and its meaning X. If the thing meant (X) can causally produce or token the symbol ("X"), that is a good start. It would be able to do so if there were a law connecting the two—and that is precisely what condition (1) stipulates. In this, condition (1) of Fodor's theory resembles attempts to naturalize meaning that preceded it. This condition emphasizes the causal component found in Stampe's and Grice's theories. It also resembles the information-based account of Dretske. Were there a perfect correlation between the tokening of "X"s and Xs, then an "X" tokening would carry the information that an X was present. However, condition (1) leaves us short of the jump from signs to symbols and offers no help solving the disjunction problem or accounting for false tokening. It will suffice to say that there exist conditions under which Xs will lawfully produce "X"s. This opens the door to the possibility that "X"s may become dedicated symbols for Xs, but it still takes more to pull this off.
This is where condition (2) comes in. Condition (2) is designed to capture the jump from natural signs to symbols. It is designed to solve the disjunction problem and account for false (and other kinds of robust) tokening. It says that not only will there be a law connecting a symbol ("X") with what it means (X), but also that for any other items that are lawfully connected with the symbol ("X"), there is an asymmetrical dependency of laws or connections. The asymmetry is such that, while it is true that the other items (Ys) are capable of causing the symbol to be tokened ("X"s), the Y→"X" law depends upon the X→"X" law. But for the latter, the former would not hold. Were this dependency to exist and were it to be asymmetrical, it is supposed to account for why the symbol "X" is dedicated to representing Xs, not Ys (where "Y" can range freely over any non-X). The asymmetrical dependence locks the symbol to its meaning/referent.
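Condition (2) can be pictured, with heavy hedging, as a structural test over a set of cause-to-symbol "laws" plus a dependency relation among them. The Python sketch below is my own toy illustration, not Fodor's formalism; the example laws and the dependency bookkeeping are invented. A symbol "locks to" the one cause on which every other law tokening that symbol asymmetrically depends.

```python
# Toy test for asymmetric dependence (invented example data).
# Laws are (cause, symbol) pairs; depends[law] is the set of laws
# that law would not survive without.

laws = {("horse", "HORSE"), ("cow_on_dark_night", "HORSE")}

depends = {
    ("cow_on_dark_night", "HORSE"): {("horse", "HORSE")},
    ("horse", "HORSE"): set(),      # the base law depends on nothing
}

def locks_to(symbol):
    """Return the cause (if any) on which all other laws tokening
    `symbol` asymmetrically depend: the candidate meaning."""
    candidates = [c for (c, s) in laws if s == symbol]
    for x in candidates:
        if all((x, symbol) in depends[(y, symbol)]          # y needs x
               and (y, symbol) not in depends[(x, symbol)]  # not vice versa
               for y in candidates if y != x):
            return x
    return None   # no asymmetry: disjunctive content threatens
```

On this toy bookkeeping, "HORSE" locks to horses because the cow-on-a-dark-night law needs the horse law and not conversely; delete that asymmetry and the function returns None, which is the disjunction problem all over again.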
Condition (3) is a "robustness" condition: it basically establishes that there are false tokenings or, if not false tokenings, at least connections between a symbol and something other than its meaning. A false tokening is one way for this condition to be met. If I look inattentively at a Sheltie dog and think it is a fox, I falsely token "fox." However, if I hear the expression "sly as a…" and token "fox," that is not a false tokening. Still, it conforms to condition (3) because the symbol is caused in me by an English phrase (and memory), not by a fox. This robust tokening does not corrupt the meaning of "fox."
Condition (4) is designed to circumvent potential problems due to the possibility of kinds of asymmetrical dependence that are not meaning-conferring (Fodor, 1987, p. 109). Consider Pavlovian conditioning. Food causes salivation in the dog. Then a bell causes salivation in the dog. It is likely that the bell causes it only because the food causes it. Yet, does salivation mean food? It is, perhaps, a natural sign, but is it a symbol? I doubt that Fodor would want to say that it is, and his condition (4) allows him to block this. For the dependency is diachronic, not synchronic. First there is the unconditioned response to the food; then, over time, there develops a conditioned response to the bell. Condition (4) says that the kind of connection Fodor is stipulating for the meaning of a symbol requires that the dependencies be synchronic, not diachronic. Thus, his theory screens off Pavlovian and other diachronic asymmetrical dependencies from the realm of meaning.
That, briefly, is Fodor’s view of meaning. Unfortunately, the theory has changed over the years and it is difficult to know which parts survive the changes, as I will now explain.
III. The Historical Instantiation Condition (HIC)
The above theory is the one that I think survives today (Fodor, 1994). However, earlier versions of the theory were clearly different (Fodor, 1987, Fodor 1990c). The earlier versions clearly relied on an additional historical instantiation condition (HIC):
(HIC): Some "X"s are actually caused by Xs.
The fact that this was once a stated condition (Fodor, 1990c) further complicates matters. For it seems clear that when it is included, we will read the above conditions (1-4) differently than we might without it. With it, conditions (2-4) seem to be conditions on actual instances of causation (not just on counterfactuals) (Warfield, 1994). Further, condition (4) seems only to make sense if we include a condition like (HIC). Without it, what sense would it make to say that dependencies between laws are diachronic (Adams & Aizawa, 1993, 1994a)? Laws are timeless. Without it, conditions (1-2) seem to be only about counterfactuals, not instances of the laws (Fodor, 1994).
(HIC) makes perfectly good sense if one is worried about excluding thoughts for Davidson’s Swampman or worried about twin-earth cases and accounting for the differences of the meaning of "water" here or on twin-earth. The problem is that Fodor now thinks Swampman has thoughts and Fodor has given up worrying about twin cases (Fodor, 1994). I think these things account for his dropping of this condition. Let me explain.
Let’s first consider the meaning of "water." In Jerry, the thought symbol "water" means water (our water, H2O). In twin-Jerry, the thought symbol "water" means twin-water (XYZ). How is that possible? There is an H2O→"water" law. But there is also an XYZ→"water" law. Since Jerry and twin-Jerry are physically identical, the same laws hold of each. So how is it possible for the meanings (broad contents) of their respective "water" tokens to diverge? It would be possible if one invoked the historical instantiation condition (HIC). For Jerry does not instantiate the XYZ→"water" law and twin-Jerry does not instantiate the H2O→"water" law. Thus, it would be possible to claim that Jerry’s "water" symbol locks to one thing because of actual causal contact with that kind of substance, while twin-Jerry’s "water" symbol locks to another kind of substance via actual causal contact with it. By including the historical instantiation condition, the theory would be able to explain these differences of broad content.
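The work (HIC) would do here can be pictured with a toy bookkeeping sketch; the names and data structures below are my invention, not anything in Fodor. Both laws hold of both twins, but each thinker's history instantiates only one of them, and on (HIC) that is what fixes the broad content of his "water" tokens.

```python
# Toy sketch of (HIC): both laws hold of both twins, but each twin's
# history instantiates only one of them (invented names throughout).

laws = {("H2O", "water"), ("XYZ", "water")}   # hold of both twins

jerry_history = {("H2O", "water")}        # only Earth-water encountered
twin_jerry_history = {("XYZ", "water")}   # only twin-water encountered

def meaning(symbol, history):
    """Candidate meanings for `symbol`: causes whose law to it is
    actually instantiated in this thinker's history, per (HIC)."""
    return {cause for (cause, sym) in laws & history if sym == symbol}
```

Drop (HIC) and the histories do no work: both twins fall under the same laws, and nothing distinguishes their "water" contents, which is why the revised theory loses its grip on twin cases.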
Also, the theory would be able to say why Davidson’s Swampman’s symbols lack meaning. Although the same counterfactuals may be true of him as are true of Davidson himself, Swampman has no causal truck with the same objects and properties as has Davidson, and so fails to satisfy the historical condition (HIC).
Useful though this condition may be, Fodor jettisons it (Fodor, 1994). He now denies that twin-earth problems are ones that a theory of content needs to address (or solve). He also now accepts that SwampJerry has the same thoughts as Jerry. Therefore, Fodor’s considered version of the theory drops this condition. (I will say more about this below.)
IV. Problems, Problems, Problems
In this section, I will consider problems for Fodor’s theory. Fodor noticed some of the problems himself, and has tried to fix them. Some of the problems were noticed by others. In some cases, Fodor has fairly adequate fixes for the problems. In other cases, Fodor is far from having adequate fixes, though he is never at a loss for something to say in reply to a problem.
Let us begin with a problem that Fodor himself noticed and tried to fix (Fodor, 1994, Appendix B), and that was noticed by others as well (Adams & Aizawa, 1992, 1994b, 1997a). How, on Fodor’s theory, can "Aristotle" mean Aristotle? Indeed, how can any name mean any individual? Were his theory not to apply to names of individuals, that would not be horrible. He admits that it does not apply to logical terms and demonstratives, for instance. Still he surely wants it to apply to names. Yet, consider clause (1). This alone tells us that there must be a law relating the man Aristotle to the name "Aristotle". However, it is standardly agreed that individuals, such as Aristotle himself, do not feature in laws. There may be laws about men that apply to Aristotle by virtue of his being a man, but not laws that apply uniquely to Aristotle in virtue of his being Aristotle. Thus, it seems that Fodor’s theory will not account for the meanings of names.
In an attempt to fix this, Fodor at one point suggests (Fodor, 1994, p. 118) that there is a law between the property of being Aristotle and "Aristotle" tokens. If true, I suppose it would be possible to have many instantiations of the Aristotle property (clones, perhaps). If we wanted to name the instantiations "Aristotle1", "Aristotle2", and so on, the original problem would come back. There would be no way to account for the meanings of these names on his theory—unless they all meant the same thing, the property, not the individuals. That the name "Aristotle" means a property seems very unlikely to be correct. "Aristotle" pretty clearly means the guy, the man, the individual, not the property—even when there is only one of him. Now Fodor may want to insist that for every individual, there is a property of being that individual. But if it were this easy for there to be properties, then why would anyone ever have thought that individuals do not feature in laws? There could be tons of laws about Aristotle (or at least, tons that feature his property). I’m betting my money on there being a difference between properties and individuals, and on names like "Aristotle" naming the individuals and phrases like "the property of being Aristotle" naming the properties (if there are properties such as the Aristotle property—which I strongly doubt).
Probably what Fodor should say is that his theory is about the meanings of names of properties, and leave it at that. Perhaps a good old-fashioned causal theory of reference will handle names of individuals (maybe even a direct reference theory of a kind that I myself would like and have advocated might do the trick). It would not tarnish Fodor’s theory if there is a division of labor and his handled only kind terms. So while this is a problem for his theory, it may be a problem of over-ambition, not substance. His theory would be in great shape, if this were the only type of problem it faced. Unfortunately, we are just warming up.
X or X-lookalike → "X"
Next we will consider the challenges to the theory based on twin-earth examples. These began early (Dennett 1987a, 1987b) and continued throughout changes in Fodor’s conditions (Adams & Aizawa, 1992, 1994b, 1997a). If there is an H2O→"water" law, there is also an XYZ→"water" law. Let us suppose that Earthlings and Twin-Earthlings cannot discriminate these substances (as is the custom). How then can Fodor’s theory explain the difference of meaning of "water" for Earthlings as opposed to Twin-Earthlings? For there should be no asymmetrical dependence. Break either law and the other law should go (given failure to detect a difference in the substances).
Dennett (1987b) extends the moral to any lookalikes. How can "X" mean X on Fodor’s theory for any X? What keeps it from meaning X or X-lookalike? (We can always assume that there will be lookalikes.)
Unfortunately, here is where things really start to get messy. For it is at just this point, as noted above, where condition (HIC) might help. If Fodor could appeal to an historical condition, then he could use that to explain why "X" means X and not X or X-lookalike. What "X" meant would all depend on which laws were instantiated for an individual over that person’s history. There may be XYZ, but Earthlings may not come into contact with it (not instantiate the XYZ→"water" law). There may be X-lookalikes, but a person may not encounter them, so they may not be relevant alternatives for the meaning of this particular person’s "X"s.
This is the line some of Fodor’s defenders have taken (Warfield, 1994), but it is no longer a line Fodor himself can use—having rejected (HIC). Fodor no longer takes Twin-Earth puzzles seriously (Fodor, 1994), basically because he doesn’t consider Twin-Earth to be a "relevant alternative" (to use language from epistemology for the same problem). But Dennett would be correct to point out that the lookalike problem is more serious, because lookalikes often are relevant alternatives in both epistemology and naturalized semantics. Has Fodor a way out? I’m sorry to say that I don’t think he has. Those who have seen this type of problem in Fodor’s account make up a rather long list (Adams & Aizawa, 1992, 1994a, 1994b, Baker, 1989, Cummins, 1989, Godfrey-Smith, 1989, Maloney, 1990, Sterelny, 1990, Boghossian, 1991, Jones, Mulaire, & Stich, 1991, Manfredi & Summerfield, 1992, Pietroski, 1993, Wallis, 1994).
Some of the best-known examples that unearth the same difficulty, but don’t specifically couch concerns in terms of H2O/XYZ, are those of Cummins, Baker, and Manfredi & Summerfield. I shall take them in order (of course, several of the others cited do so as well—but I cannot discuss them all here).
Cummins (Cummins, 1989) picks mice for his Xs and shrews for his X-lookalikes. Then if there is a mouse→"mouse" law for condition (1) to be satisfied, and a shrew→"mouse" law for condition (3) to be satisfied, attention turns to condition (2). Is it true that the shrew→"mouse" law asymmetrically depends upon the mouse→"mouse" law? Cummins goes through the many ways of explaining why this seems highly unlikely. Of course, Fodor can always dig in his heels and insist that his theory provides only sufficient conditions and that when his theory explains meaning, (2) will be satisfied. The natural question is how (2) is satisfied. Cummins goes through all the relevant moves that do not work if the mediating mechanism is "mousey looks" that both mice and shrews give off. If that is the mediating mechanism to "mouse" tokens, then it looks like there simply will be no asymmetrical dependence. Now there may be other properties besides mousey looks that mediate the laws. And then there may be properties that mice have and shrews lack, but that if mice didn’t have, shrews wouldn’t be able to "poach" upon, so to speak, in causing "mouse" tokens. This is what Fodor will need to say if his theory is to apply to us when we mistake a shrew for a mouse. Yet one would want to know why we make these mistakes if there are other properties that mediate the mouse→"mouse" law—properties that shrews lack. I suppose there is always inattention and human frailty that Fodor can rely upon in such cases to explain why (2) is true even though (3) is true.
Baker’s (Baker, 1989, 1991) robot-cat example is perhaps the best-known example that exploits this similar worry about condition (2). Her choice for X is cats and for X-lookalikes robot-cats. She imagines Jerry first seeing robot-cats, later seeing real cats, and discovering later that he was wrong about cats (thinking that they were not robots). So both of the following laws hold: robot-cats→"cats" and cats→"cats". Both are instantiated by Jerry. The question is what does "cat" mean for Jerry? Baker strenuously argues that "cat" cannot mean cat (and I think she is right). She also argues that it cannot mean robot-cat (here too, I agree). But she seems to think it cannot mean cat or robot-cat because, if it did, Jerry could not later say that he was mistaken about cats. I think Fodor’s theory commits him (and he agrees, by the way) to saying that there is no genuine asymmetrical dependence in this case. Now this may present an inconsistent reply to the one he owes Cummins, of course, but it sure looks to me (and to Fodor) as if "cat" for Jerry in this example will mean cat or robot-cat precisely because there is no asymmetrical dependence of the laws. What to say about Baker’s "second-order mistake"? I think that Jerry could certainly say that he was mistaken about cats. For he would not have been wise to the robots. He would not have been wise to there being a disjunctive meaning to his term. That surely counts as a second-order mistake about what kind of term his "cat" term was and what it meant. So this may well get him off the hook. But notice that this escape exploits the fact that the two sets of laws in this example were both instantiated. That is, the escape seems to need the (HIC) condition that he abandons. Without it, all of our terms "cat" should suffer the same fate, even if we never witness robot-cats. As long as there is such a property, instantiated or not, we would all be doomed to having a disjunctive term "cat."
(As I said, this is messy, isn’t it? Nevertheless, I shall press on.)
Manfredi & Summerfield (Manfredi & Summerfield, 1992) realize that a way around the worry raised by Cummins is to exploit multiple mechanisms from X to "X". That way, if an X-lookalike shares only a few properties with Xs, there may be plenty of properties of Xs left to support the asymmetrical dependence of condition (2). They try to defeat this escape in the following way. They suggest that Jerry learns the cow→"cow" connection. Then he mistakes horses for cows, instantiating the horse→"cow" connection (condition (3)). They concede the dependency of (2). Now they ask us to imagine that cows alter their perceptible appearances (through evolution or radiation, whatever) so that cows now have none of their prior perceptible properties. They argue that "cow" would still mean cow despite the new failure of (1). That is, since putting a new-look cow in front of Jerry would not evoke a "cow," they say that the cow→"cow" law is broken, yet "cow" still means cow.
I’m pretty sure Fodor would want to agree that "cow" still means cow, even if cows change their looks (smells, sounds, and other appearances), as long as they don’t change their species. A sticking point for him would be (and I suggested this to Manfredi & Summerfield) whether they did in fact break the cow→"cow" law. I ask you, would not a cow (with its old appearance) still cause in Jerry a "cow"? Surely it would (and they would agree). If so, then on what grounds can they claim that the original cow→"cow" law has been broken? It has been blocked or masked, but if you sealed all cows in containers, this would just prevent further instances of the law—it wouldn’t break the law. So the cows’ changing their appearances might be another way of masking the law (without breaking it). Thus, ingenious as the example is, I don’t think it spells doom for Fodor. It does, however, add to the list of worries that must be quashed.
So those are a few doses of the worries about the asymmetry condition (2). We will see that there are still more worries about that condition below, but let us now look at some other types of concern. In particular, we will consider the worry about uninstantiated properties.
Another concern that Fodor is right on top of is the concern about uninstantiated properties. How can his theory apply to them? This would especially be a concern were his theory still to include condition (HIC), for, as we know, the unicorn → "unicorn" law is uninstantiated. Fodor’s first treatment of the meaning of "unicorn" (Fodor, 1990c) attempts to say that his theory accounts for its meaning even if the law is uninstantiated, so long as there wouldn’t be non-unicorn-caused "unicorn"s unless there were close worlds in which there were unicorn-caused "unicorn"s. Now all of this is highly pretentious, of course, and depends on a metric for nearness of worlds that Fodor doesn’t have (Cummins, 1989; Sterelny, 1990; Loar, 1991). What is most frustrating is that when people "call him" on this, Fodor always notes that he can divide and conquer. That is, he can say that "unicorn" is a complex term that means horse with a horn, and then apply his theory to the meanings of the component parts (Fodor, 1991). So why doesn’t he just do that from the beginning and save us from attempted fixes like the one above? I don’t know. Further, he realizes that he must do something different if the property in question is nomically impossible, such as round square. It is pretty clear that he won’t try to say there is a round square → "round square" law to give the meaning of the term "round square." He will decompose. So why not do so with "unicorn"?
His reluctance to use a decompositional strategy for such terms baits others (Wallis, 1995) to come up with terms such as "gape" (for giant ape) or "gant" (for giant ant), for which Fodor cannot employ the counterfactual strategy above. Suppose a giant ant is a nomological impossibility for biological reasons (their legs would crush under their weight, circulation would not be possible, there would be heat-transfer problems, and so on). Then there could not be a giant ant → "gant" law to give the meaning of "gant." Only the decompositional strategy would work here, and it seems the best way to respond to the challenges of Loar and Baker anyway. So I think Fodor should take this way out of this problem. If he does, I don’t think this problem is fatal, but others may be.
Too much meaning (semantic promiscuity)
I mentioned above that I have always associated meaning with minds. In fact, it would be nice to make meaning a mark (if not the mark) of the mental. One might expect similar sympathies to lead Fodor to restrict the items "X" that can mean something X to denizens of minds, but he does not. This would especially be expected since he relies on asymmetric causal dependency to differentiate meaningful from non-meaningful items. As many know, this is the type of dependency that, following Larry Wright’s analysis of teleological functions, is often thought to differentiate a structure’s mere effects from those which are its natural functions. Famously, hearts make heart sounds, which infants find calming, and hearts also circulate the blood. Wright’s answer to why circulating the blood is a function of the heart while calming infants is not is that hearts would not calm infants but for the fact that they circulate the blood (and not vice versa). This is exactly the type of asymmetrical dependency (it doesn’t go the other way around) that Fodor’s theory relies upon. Yet one does not want to say (especially not Fodor) that circulation means heart (though it may be a natural sign of one). Fodor can block the attribution of meaning with his conditions (3) or (4), of course. Still, if Wright’s view is even close to correct (and it has seemed at least close to many), there may be a lot more asymmetrical dependency around in the world than Fodor realizes. He seems to think that it is largely restricted to meaning. If it is not so restricted as he thinks, there may be more meaning around than Fodor wants to accept: too much meaning, in fact, for his theory to be true.
Adams and Aizawa (1992, 1994a, 1994b) have pressed this point, arguing that Fodor’s theory has the unintended result of making meaning more promiscuous than he intends. Their pigeon example exploits the strategy of finding such asymmetries in nature which, on the theory, turn out to involve semantics. Pigeons produce pigeon droppings (with the relevant law instantiated). Suppose that scientists can also produce these droppings (chemically indistinguishable from the real thing). Suppose further that the scientists would not be able to do this but for the fact that the pigeons do, and that this is a synchronic dependence. Were all of these conditions met, droppings would mean pigeons on Fodor’s account (and not merely be natural signs of them)! However, it is clear that pigeon droppings are not semantically evaluable.
This example may seem too contrived to be threatening but, thanks to an example brought to my attention by Colin Allen, it can be seen to have the exact same structure as an example found in nature. Kudu antelope eat the bark of the acacia tree. Consequently, the tree emits tannin, which the kudu don’t like. Not only that: the wind carries this downwind to other trees, which will emit tannin too. Further, were a human to disturb the bark of the acacia tree, it would emit tannin too. Now all of Fodor’s conditions are satisfied: (1) kudu bites → tannin; (2) human disturbance → tannin is asymmetrically dependent upon kudu bites → tannin; (3) human disturbance → tannin; (4) some instances of (3) occur. So it sure looks as if tannin in the acacia tree is going to mean kudu on Fodor’s theory. Fodor may wish to accept this attribution, but he would do so only grudgingly, at best. Both examples surely make it look as if the theory is semantically promiscuous.
"X" means X or proximal projection of X
Another issue that many have noted is that Fodor seems to be in danger of having to say that an item of mentalese means something on the retina, rather than the distal object or property in the world that it is supposed to mean. Fodor is on to the problem as well (Fodor, 1990c) and thinks he has a fix. Others are not so sure (Sterelny, 1990; Antony & Levine, 1991; Adams & Aizawa, 1997b). The problem becomes apparent when one considers the mechanisms mediating the X → "X" law (for any sensory modality, any "X", and any X). Let’s just consider vision and the proximal projection of Xs on the retina. Why doesn’t "X" mean proximal projection of X, rather than X, on Fodor’s account? Fodor’s answer is that his condition (4) is not satisfied: all the X-caused "X"s are also proximal-projection-of-an-X-caused "X"s (Fodor, 1990c). So there is no robust causation of "X"s.
First, it is not clear that this is true. One clearly might visually hallucinate an X (and thereby suffer a robustly caused "X"). Second, if (4) is not satisfied, this saves Fodor from saying that "X" means projection of X, but it also prevents him from saying that "X" means X. Not good! Worse, suppose that thoughts of Xs cause "X"s, and Xs cause "X"s, only because sensory projections of Xs cause "X"s. Then we have robustness (condition (4)) and asymmetrical dependence on projections of Xs (condition (2)), and conditions (1) and (3) are satisfied as well. Now it looks as if Fodor’s theory forces us to say that "X" means proximal projection of X. This is clearly seen to be problematic if we substitute "cow" for "X" and cows for Xs. For then "cow" means proximal projection of a cow (not cow). Not good, indeed. (For more failed attempts to avoid this result than you may think possible, see Adams & Aizawa, 1997b.)
Several have worried that Fodor’s theory, while ingenious, is vacuous: that is, it applies to no actual meaningful items or, if it does yield a meaning, yields the wrong one (Baker, 1991; Seager, 1993; Adams & Aizawa, 1992, 1994a, 1994b). Consider pathological causes. Water may cause "water"s in Janet. So may a blow to the head, an hallucinogenic drug, a brain tumor, or a high fever. Thirst may also cause "water"s in Janet, and possibly only because water does. Still, the existence of the pathological causal laws violates the asymmetrical dependency condition (2). Yet we all seem subject to such pathologies. So if Janet’s "water"s and ours mean water, they won’t mean water via Fodor’s conditions. His theory simply won’t apply to us. And if his theory did apply to us, "water" would mean water or ___ (where the blank is filled by a pathological cause), thereby yielding the wrong meaning, and not one that "water" has for us.
Now, I think I mentioned above that things become extraordinarily messy when one considers throwing Fodor’s (HIC) condition back into the mix (Warfield, 1994). It may for a time look as though (HIC) can get Fodor out of this type of jam (the charge that the theory is vacuous). However, I will just refer you to the tedious passages of other works that claim to show that the (HIC) condition does not work: it offers Fodor no ultimate help in dealing with these problems, and putting a finger in one hole just causes a leak to spring elsewhere (Adams & Aizawa, 1994a, 1994b). Furthermore, since Fodor has now rejected the (HIC) condition, it is clear that he would not avail himself of such rescues in any case.
Without the (HIC) condition, Fodor is going to have to appeal to counterfactuals alone. He will have to say that worlds where water causes "water"s are closer than worlds where pathological causes do. Besides presupposing a metric for nearness of worlds that he does not give, this seems clearly false: no world is closer to this world than itself, and pathological causes abound here. So we see no way out for Fodor on this one.
Before closing, I cannot resist saying a bit more about the consequences of Fodor’s rejection of (HIC) (Fodor, 1994; Adams & Aizawa, 1997a). He now embraces the view that SwampJerry has the same thoughts as Jerry. On what grounds? It would be simpler (more aesthetically pleasing). True, but his view already has warts (names, demonstratives, logical constants). One may token "X" in the absence of Xs. True, but the currently meaningful "X"s may depend on past tokenings of "X"s in the presence of Xs. Intuition is strong that Swampman has thoughts. Yes, but intuitions can be corrupted by theory. My intuition is that meaning has an historical component, so it seems intuitively clear to me that SwampJerry lacks thoughts. At one point Fodor asks: "when you ask Swampman what day it is and he says that it is Wednesday, what explains this if not his thinking it is Wednesday?" (Fodor, 1994, p. 117). My answer: the syntactic "today is Wednesday" in his would-be belief box (they would be beliefs if they had content…which they don’t). At another point, Fodor asks why it is more plausible to say that SwampJerry means H2O by "water" and Twin-SwampJerry means XYZ than vice versa, unless they have thoughts. My answer: because if Twin-SwampJerry meant anything by "water," he’d mean what is meant by the most proximate population of believers, and on Twin-Earth, where he is, they mean XYZ. If Twin-SwampJerry meant anything, that is what he’d mean (but he means nothing). If I’m right, this is another reason to think Fodor’s theory must be wrong, for his theory entails that SwampJerry shares all of Jerry’s thoughts.
Which came first: meaning or asymmetrical dependence?
Many authors have doubted whether asymmetrical dependencies generate meaning, as Fodor’s theory requires, or whether it goes the other way around (Seager, 1993; Gibson, 1996; Adams & Aizawa, 1994a, 1994b; Wallis, 1995). I shall close with some reasons why Fodor’s entire project may be fundamentally misguided.
Fodor’s theory requires that the asymmetrical dependencies of condition (2) rest upon a purely non-semantic basis. These asymmetries are supposed to bring meaning into the world, not result from meaning. If Ys cause "X"s only because Xs do, this must not be because of any semantic facts about "X"s. What sort of mechanism would bring about such purely syntactic asymmetric dependencies? In fact, given what is likely to be a world where pathological causes are pervasive, why wouldn’t lots of things besides Xs be able to cause "X"s? The instantiation of "X"s in the brain is some set of neuro-chemical events, and these are able to be caused naturally by causal interaction with things in one’s environment. There should be lots and lots of natural causes capable of producing such events in one’s brain (and under a wide variety of circumstances). Why on earth would steaks be able to cause "cow"s in us only because cows can cause "cow"s in us, given that "cow"s are uninterpreted neural events? Why indeed, when the "cow"s must be individuated non-semantically? Only after the asymmetries obtain does "cow" mean cow (on this theory). There is not first meaning and then asymmetry. But what, then, would explain the required asymmetry? Is it brute? Meaning, for Fodor, is not supposed to go that deep.
Often, in explaining the existence of such asymmetries, Fodor relies on the "experts" and on their intentions to use terms (Fodor, 1990c, p. 115). But, of course, this won’t do. One cannot appeal to meanings to explain the existence of underived meanings. So where do the underived asymmetries come from? My best guess is that it goes like this: "cow" means cow, "steak" means steak, we associate steaks with cows, and that is why steaks cause "cow"s only because cows cause "cow"s. We wouldn’t associate steaks with "cow"s unless we associated "cow"s with cows and steaks with cows. This explanation of the asymmetrical dependency exploits meanings; it does not generate them. Unless there is a better explanation of such asymmetrical dependencies, it may well be that Fodor’s theory is misguided in attempting to rest meaning upon them.