by Frank Keil
Department of Psychology
One of the most inflammatory issues in current cognitive science appears to be the debate between nativist and empiricist approaches to knowledge acquisition. I say "appears" because so often the debaters seem to be talking past each other, arguing about different things or misunderstanding each other in such basic ways that the debates can seem incoherent to an observer. For these reasons there has been a powerful need for a systematic treatment of the different senses of nativism and empiricism that considers both their historical contexts and their current manifestations. Cowie's book offers such a treatment, one that goes far beyond prior attempts. It is a remarkably clear and insightful exposition and critique of nativist views from the earliest writings to the most current debates. It helps all of us understand better what others are talking about when they don't subscribe to our brand of nativism or empiricism. It also reveals just how much theoretical and empirical work needs to be done before we can get a clear handle on what is really the truth about the innateness of language, mathematics, folk psychology, and many other potential domains. Yet, despite these powerful virtues, the book also falls short on some key issues that seem necessary for laying out an agenda for future empirical or theoretical work on nativism. I will tend to focus in this essay on those missing links, while also repeatedly stating that this book represents a major leap forward in making sense of what it means to say that some aspect of the mind is innate.
Mystery and Modularity
It is no news that one view of nativism relies on domain specificity. As Cowie points out in a thoughtful recounting of the evolution of Chomsky's ideas, the notion of a domain specific learning faculty became very clear in his writings from the mid-1960s onwards. By 1976 it was known as "task specificity," when Osherson and Wasow (1976), in a "methodological note," made explicit the fact that everyone thinks something must be innate in the child for her to learn language, since the child's doll or pet doesn't. The debate centered on whether what was special to the child that enabled her to learn language was tailored specifically for language or was more general. A few years later, I attempted to broaden the approach to all kinds of knowledge acquisition by using the term "domain specificity" and extending the notion of constraints in linguistic theory to the idea of domain specific constraints on knowledge (Keil, 1981). Different specialized systems of learning seemed to have several advantages over a less specifically constrained, all-purpose one. With the rise of "evolutionary psychology," much has now been made of the need for an "adapted mind" and of the evolution of domain specificity (Cosmides and Tooby, 1994).
But despite this point of view and its almost ancient status in the new discipline of "cognitive science," the notion of nativism has remained murky and confusing in the literature. Cowie's book helps us understand why. Cowie attributes much of the confusion about nativism to an ambiguity in which it stands either for domain specificity or for a mysterious and perhaps unfathomable process. Not surprisingly, she rejects the mystery view and argues for the coherence of the domain specificity view, eventually leaning towards the view that language probably is innate in the domain specific sense. Her rejection of the mystery view also becomes her primary way of criticizing Fodor's (1981, 1998) nativism, by arguing that there is no real mystery involved and that plausible psychological mechanisms can explain the origins of concepts.
Most concepts do not have simple definitions but do seem to be combinable in highly lawful ways. Fodor has used these two points to argue that concepts have no internal structure and thereby that all concepts are acquired in the same manner as "red" is acquired. The acquisition occurs in a mysterious manner that, as Cowie notes, is really not within the bounds of psychology at all and bears striking similarities to much older, nearly mystical or religious accounts.
Cowie's way out is to argue that concepts are not all of one type and that, in addition, each concept might have several different aspects, such as prototypes, definitions, and routes of access to experts. Different parts might fill different roles, such as in conceptual combination, concept acquisition, and categorization. Thus, while agreeing that prototypes do not play a role in helping concepts compose, Cowie argues that prototypes might well be the part of a concept that fixes reference. But how are we to know that the prototype is still part of the concept proper, as opposed to an "identification heuristic" that is not at all part of the conceptual core? This question has haunted the field for many years (Armstrong, Gleitman and Gleitman, 1983), and yet Cowie passes over it lightly, as if it were not a big deal. But it is a big deal. How does the prototype part fit with the part that supports composition? And, perhaps most important, why does it frequently fail and get overridden by other "theory laden" parts (Murphy and Medin, 1985; Keil, 1989)? To many observers, being black correlates perfectly with being a videocassette and should thereby be absolutely central to its prototype as a maximally typical feature. Yet the first red videocassette we see is just fine as a videocassette, whereas the first spherical one is much more problematic. Prototypes, at least as they are normally conceived as statistical summaries of feature occurrences and co-occurrences, are woefully inadequate to explain much of what we do with concepts. Their structural richness is surely associated with concepts; but what makes them parts of the concepts proper, as opposed to looser affiliates?
Prototypes, or some other kind of statistical tally, may indeed be a proper part of all but a few concepts; but detailed accounts of how that part relates to the parts that serve compositionality and other aspects of thought need to be specified before we can make much progress in understanding what concepts are and where they come from (Keil et al., 1998).
It is unfair to burden Cowie with the task of fully explaining what concepts are. However, having gone into Fodor's views in such depth, it may also be unfair of her to dismiss him by simply saying that concepts are heterogeneous and richly structured in ways that get around Fodor's concerns, without telling us more about how that structure works.
Cowie limits her discussion of Fodor and concepts to the mystery version of nativism, and this is unfortunate. Despite having a clear interest in domain specificity and nativism in his discussion of modularity, Fodor refuses to extend them to concepts because of concerns about meaning holism. Cowie likewise considers domain specificity largely with respect to natural language grammar, skirting the meaning holism issue and questions about the domain specificity of conceptual knowledge. While it is true that domain specificity emerged most clearly in Chomsky's writings about grammar, it may be most interesting in terms of whether it can extend to sets of natural conceptual domains. Indeed, some of the more classic discussions of nativism as domain specificity certainly meant it to be relevant to concepts, as Cowie nicely illustrates in her historical review; yet aside from a few comments mentioned below, she fails to return to that idea in discussing contemporary cognitive science.
Why Domain Specificity Isn't Enough to Stake Out Even One Nativist Position
Cowie rightly points out that domain specific constraints might be necessary to guide learning of something like language, yet might still be compatible with an "enlightened empiricism" in which those constraints are themselves acquired and then guide further learning. But even when domain specificity is built in, it is not in itself a nativist view. No one denies that almost all animals, and certainly humans, have domain specific structures and processes tailored for certain kinds of information. They are called sense organs. The eye has structures that are tailored to pick up information in the "domain" of light, while the ear is so tailored for sound, and the nose and mouth for arrays of volatile and non-volatile chemicals. If domain specificity merely means the possession of structures and processes that favor the acquisition of certain kinds of information over others, then it isn't enough to distinguish nativism from empiricism.
The critical additional issue is how far "upstream" domain specific specializations exist as we consider the flow of information from the sensory transducers themselves "up" to the highest levels of cognition. Even the empiricists probably don't limit such specializations to the sensory transducers themselves, such as the rods and cones of the eye. They would freely acknowledge the complex circuitry of the retina and how it leads to certain perceptual phenomena, such as Mach bands. Many contemporary empiricists would grant that there are specializations all the way up to V1 in the visual cortex that are specialized, for example, for binocular fusion of images from the two eyes and that are therefore tailored for depth-related information. Similarly, the categorical perception of color by newborn infants is taken by everyone as evidence for domain specific structures and processes tailored for the "domain" of color.
Go a little higher, to "mid-level vision," where there seem to be distinct structures and processes tailored for coding surfaces and objects and for dealing with occlusion. Some argue that this is the first level at which attention can be deployed and at which the information being processed is available to conscious awareness (Nakayama, He, and Shimojo, 1995). At this point, things get a bit trickier. If there are specialized circuits for picking out bounded physical objects, is that too high a level of specialization for the empiricist? Move on to high level vision and consider the case of face perception, and we are at a point where Cowie and many others see a clear divide between nativists and empiricists. Certainly cognitive science is now hotly debating whether we have a specialized face processing area or a more general, all-purpose area that supports expert pattern recognition (Allison, Puce, Spencer, and McCarthy, 1999; Gauthier, Tarr, Anderson, Skudlarski and Gore, 1999; Kanwisher, 1998).
With faces too, however, the issue becomes more intricate as details are considered. Johnson and Morton (1991), for example, take an empiricist view of the development of the ability to perceive faces. It is an enlightened empiricism, since it ultimately allows for a domain specific face processing system, even one with its own dedicated chunk of neural tissue that, when injured, could cause a selective face processing deficit, or that could light up selectively in fMRI studies. Yet Johnson and Morton tell the developmental story by also building in what seems to be a little domain specificity from the start. Since newborn humans do prefer to track coherent faces over scrambled ones, something is needed from scratch. Johnson and Morton grant the newborn something like a three-blob inverted-triangle detector corresponding to eyes and mouth that enables the infant to "lock onto" faces. It thereby shunts face-related information to another brain region that has no prior predispositions to process face-like information but which only receives information about faces and thereby comes to be specialized through connectionist wizardry.
Why is such a story benign empiricism and not nativism? Is it because the domain specific processing is not sufficiently psychologically complex? Cowie seems to use such a criterion in her discussion of Fodor, where she happily grants the likelihood of upside-down-triangle systems serving as face detectors and "squiggle" systems serving as snake detectors, because they are not psychologically complex and therefore apparently compatible with empiricism. (She also grants, in the same passage, innate systems for thinking about mothers, agents, food, and cause, only holding the line at not allowing innate systems for doorknobs or platypuses. I don't want doorknobs to be innate either, but I saw the granting of moms, agents, and cause as going far beyond upside-down triangles for faces and squiggles for snakes, and not at all friendly to empiricists.)
Domain specificity therefore seems to distinguish this one sense of nativism from empiricism only when the kinds of things that are domain specific are not simply sensory transducers. Cowie suggests that they might have to be psychologically complex. What then is psychological complexity? Number of processing steps? Level of abstraction of the things computed? Cognitive penetrability? All of the above and more? Complexity is never easy to define in absolute terms, and it doesn't help much to make the metric a psychological one. Suppose, for example, we discover that retinal ganglion cells have far more processing power than we had previously thought. Perhaps they act like sophisticated neural nets in how they process edges. They just might, but that isn't the kind of psychological complexity Cowie means. Suppose, conversely, that "high" up in the cortex, far beyond the various sensory projection areas, we find a very simple computational circuit for detecting mechanical causation. When bounded objects (as indicated by mid-level vision) are seen to interact temporally in a way where velocity reductions in one immediately precede velocity increases in another, the circuit fires and the concept "mechanical cause" or "x launching y" springs into consciousness. I have no idea if such a circuit is, or could be, simple, but it doesn't seem implausible that it might be simpler than some retinal circuit or one in the lateral geniculate nucleus. Moreover, there are reasons to believe that infants might well perceive/conceive such events from the start as causal (Leslie, 1995).
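To make the point about simplicity concrete, here is a deliberately crude sketch, entirely my own illustration rather than anything proposed in the literature under review: a launching detector that merely watches the per-frame speeds of two already-segmented objects (the kind of representation mid-level vision is said to supply) and fires when a sharp drop in one object's speed immediately coincides with a sharp rise in the other's. The threshold and the speed-sequence representation are arbitrary assumptions.

```python
# Toy "launching" circuit: fires when object A's speed drops sharply at the
# same transition where object B's speed rises sharply. Purely illustrative;
# the threshold and input format are assumptions, not a model from the book.

def detects_launching(speeds_a, speeds_b, threshold=0.5):
    """Return True if a sharp drop in A's speed immediately
    precedes/accompanies a sharp rise in B's speed."""
    for t in range(1, len(speeds_a)):
        drop_in_a = speeds_a[t - 1] - speeds_a[t] > threshold
        rise_in_b = speeds_b[t] - speeds_b[t - 1] > threshold
        if drop_in_a and rise_in_b:
            return True  # "x launches y" springs into consciousness
    return False

# A approaches B, stops on contact, and B moves off at once: launching.
assert detects_launching([1, 1, 1, 0, 0], [0, 0, 0, 1, 1])
# B starts moving long after A has stopped: no immediate hand-off, no launching.
assert not detects_launching([1, 1, 0, 0, 0], [0, 0, 0, 0, 1])
```

The computation is a handful of comparisons, arguably simpler than much retinal circuitry, which is exactly what makes the "psychological complexity" criterion awkward.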
But surely having an innate system for mechanical causation would upset Hume, and it does upset many current cognitive scientists who say they are empiricists. Even if the computational processing were quite simple, they would reject it. There seems to be a notion of not allowing domain specificity to go too far upstream, regardless of its psychological simplicity or complexity.
There seems to be a continuum of information processing that runs from circuits in the retina tailored to certain light patterns to circuits in the prefrontal cortex tailored to such things as agents, living kinds, and other minds. Domain specificity, in at least one sense, could exist all along it and could be either innate or acquired. At the two extremes, everyone seems to know who is who. Empiricists aren't bothered by retinal circuits but would be bothered by a built-in circuit tailored for thoughts about other minds, especially if that circuit were more than the result of a low-level shunt triggered by a perceptual bias for things whose pupils "looked at you" when you moved or made noises.
We need to explore the idea of a sensation-to-knowledge continuum more carefully and understand where empiricism and nativism come in. It is a complex task, since neuroscience increasingly reveals that the neural circuitry for processing information from the sensory transducers upwards is full of enormous feedback pathways that go back to lower levels. As the feedback loops become richer and more interconnected, the notion of upstream becomes less and less obvious. I am still optimistic that we can tell a story of what upstream means and get a clearer idea of what boundaries empiricists will not cross; and I hope that psychological complexity can be clarified in ways that may further illuminate the contrast. But there is much work to do before we can give domain specificity much traction.
Current debates over the nature of autism offer a powerful example of the need for clarity. Many believe that autistic children are innately deficient in a "theory of mind" (Frith, 1999). A domain specific folk psychology is assumed to be missing, or at least impaired, in autistic children; they therefore have to learn about other minds using a domain general learning system that just isn't good enough for the task, and a huge deficit results. Empiricist-leaning researchers do not like this view and try to argue against it, either by pushing any specializations "downstream" to deficient simple perceptual mechanisms that somehow cascade into catastrophic cognitive deficits or by severely dumbing down the cognition involved (e.g., Devlin, Gonnerman, Andersen, and Seidenberg, 1998; Farah and McClelland, 1991). But as each side gets into the intricacies of what really is missing in the newborn autistic infant, it becomes unclear what they hold as sacred without a more principled account of levels and kinds of domain specificity. A theory is needed of what is downstream and dumb and what is upstream and smart.
Such a theory may also help us get clarity on what counts as a legitimate domain. A domain seems to be more than just any pattern of information in the world for which the body has a specialization. Surely innate allergies to poison ivy do not mean we have domain specific constraints on learning about poison ivy, even if those allergies result in all of us learning about poison ivy differently from daisies. But move to innate preferences for sweet substances and it becomes a bit more plausible to argue for domain specific learning about sweet things. Presumably such an account would go through if the perception of sweetness were somehow linked to higher level processing about sweet things, perhaps involving notions of objects, food, and the like. Understanding domains may go hand in hand with the development of an account of levels of information flow and psychological complexity.
Cognizing Cognitive Constraints
Chomsky used to talk about "knowing" a grammar but, as Cowie notes, under repeated protests from philosophers, he shifted to the thoroughly unsatisfying term "cognizing" instead. Why does anyone care? A big part of the reason is that some ways of being mentally linked to a grammar seem much less plausible than others. If, for example, cognizing means that we are talking about implicit rather than explicit knowledge of a grammar, it doesn't seem as painful to grant that kind of knowledge to an infant. In today's cognitive science parlance, "implicit" usually means unconscious and "explicit" conscious. It is also normally assumed that explicit cognition is more like sentences in the head and contains symbols and rules, while implicit cognition is more probabilistic, automatic, and probably amenable to connectionist modeling (Sloman, 1996; xxx). Chomsky's grammar as implicit knowledge would be an exception to this pattern if it were full of symbols and rules yet out of awareness. However, that grammar seems to be migrating away from rich sets of rules to minimalist structure whose epistemological status is even harder to discern.
All this matters because it returns us to the kind of domain specificity that is relevant to the nativist. Nativists do not usually argue that most specific beliefs are innate, at least partly because they seem to be revisable. Perhaps the belief that parallel lines never meet is innate, and perhaps the belief that causes must precede effects is another. If we are born with mentalese, it may have both a set of primitive concepts and a small set of beliefs about those concepts stated in mentalese, that is, assuming that some notion of analytic and synthetic statements could be preserved. But no one, including Fodor, thinks that the rules of grammar are stated in mentalese. They are latent somewhere else. Chomsky used to talk a good bit about innate constraints on grammar, and that way of talking still strikes me as one of the most useful ways of understanding domain specificity.
Whether one is talking about a hypothesis testing system or a three-layered connectionist net, it is possible to guide learning in such architectures by domain specific constraints. In a hypothesis testing system, those constraints might place prohibitions on certain classes of hypotheses, either stated as such or falling out of the syntax of how hypotheses are computationally evaluated. In a connectionist net, a particular configuration of units, links, and weights might have boundary conditions that cannot be exceeded, and only certain kinds of feedback might be allowed. Nowhere is there a rule or statement, yet the net as a whole might well be domain specific in that it works very well for learning about grammatical rules but very badly for learning about faces. Some argue that connectionist nets cannot in principle learn real rules (Marcus, in press); but I remain more agnostic, as PDP models proliferate in many different forms and no one yet really seems to know how to describe their formal computational powers. It will be interesting if all connectionist architectures do end up having absolute constraints on what they can learn; but whether they do or not, there might also be additional domain specific constraints on what different nets can learn.
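The idea that a constraint can live in the architecture rather than in any stated rule can be illustrated with a toy example of my own devising (not from Cowie or Marcus): a one-unit perceptron whose weights are clipped so they can never go negative. No rule anywhere says "NAND is forbidden," yet the constrained learner masters OR easily while NAND stays out of reach until the constraint is lifted.

```python
# A one-unit perceptron with an architectural constraint: weights are clipped
# to be non-negative after every update. Nowhere is a prohibition stated as a
# rule, yet the constraint makes some patterns learnable and others not.
# Purely illustrative; all parameter choices are my own assumptions.

def train(patterns, epochs=100, lr=0.1, clip=True):
    """Train a threshold unit; optionally clip weights to stay non-negative."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in patterns:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
            if clip:  # the built-in "constraint": no negative weights allowed
                w = [max(0.0, wi) for wi in w]
    return w, b

def accuracy(w, b, patterns):
    correct = sum(
        ((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t)
        for (x1, x2), t in patterns
    )
    return correct / len(patterns)

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
NAND = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train(OR)          # learnable: OR needs only non-negative weights
w, b = train(NAND)        # blocked: NAND requires a negative weight
w, b = train(NAND, clip=False)  # learnable once the constraint is lifted
```

Nothing in the code mentions NAND; the unlearnability simply falls out of a boundary condition on the weights, which is the sense in which a net can embody domain specific constraints without stating them.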
Domain specificity here does not require the absolute unlearnability of that which violates the constraints. It could also refer to relative ease of learning. Maybe, just maybe, bonobo chimps can acquire the rudiments of natural language; but even if they can, they may do so in a way that is wholly different from how humans do, a much harder, slower, and more error prone way. I can train our Lhasa Apso to retrieve, but no matter how hard I try, she doesn't take to it as easily and naturally as our Labrador Retriever did; yet she takes much more easily to learning certain guard dog behaviors than the Labrador Retriever did. Pigeons can be trained to flap their wings to get food and to peck to avoid shocks, but only in the course of heroic training regimes that are orders of magnitude more intense than training them to flap their wings to avoid shocks and peck to get food. You don't need absolutes to invoke domain specificity.
If specific rules, concepts, or beliefs are innate, then relative ease of learning does not seem very relevant. If constraints on learning are innate, then they can be seen as guiding biases rather than absolutes. Moreover, the ways in which the constraints are stated can be wide open, ranging from symbolic rules on what is and is not allowable to ways in which the computational machinery "resonates" better with some patterns of information than others. Some computers, because of their configurations of hard drive, RAM, clock speed, and flow bottlenecks, play video games better than other computers that might be better at spreadsheets. But no one can look inside the software or hardware and find symbolic rules stating constraints on computing spreadsheets vs. games. In other cases, with dedicated processing components, such rules can be found; but they don't have to be present to still yield a kind of domain specificity.
Suppose it turns out that there is an interaction between aspects of mid-level vision, a chunk of medial temporal cortex, and prefrontal cortex that in the aggregate learns especially well about the beliefs and desires of others and learns poorly about the movements of inanimate objects. There may be nowhere in that system a rule stated as such, but the system still embodies domain specific constraints, and quite possibly ones rich and "high level" enough to count as squarely nativist.
The point about ease of learning is that, empirically, we often need to look more closely at the specific trajectories of learning, as well as at how learning patterns change across ranges of inputs, to decide between nativist and empiricist claims. Simply showing that something can be learned is frequently not enough. This is also why species comparisons are tricky. We might be able to learn language much better than chimps because we have a dedicated language learning system and they don't, or because we have a bigger, faster, smarter all-purpose learning system. We have to look at the details of how other animals learn what they do to get a better idea of whether they are doing it like us on a smaller scale or in a different and less adapted manner. Ease of learning issues also allow a much more reasonable take on poverty of the stimulus notions. A set of input information might certainly be much more impoverished with respect to one learning system than another, making learning that much slower or more error prone. It is hard to prove absolute learnability in most real systems, but much easier to show tradeoffs between systems in what they can learn more easily.
Perhaps because she doesn't explore the constraints notion much outside of grammar, Cowie hardly addresses relative ease of learning and its relevance to domain specificity and species specificity. This omission is troubling, for relative ease of learning may be the best way of understanding what a coherent nativist view of concepts might mean.
If one sees most concepts as partly arising from how they are situated in larger systems of thought, a set of implicit domain specific constraints may be the best way of understanding a nativist view of concepts. Most concepts are not themselves innate, but their nature may be heavily biased by the roles they play in various innate "modes of construal" (Keil, 1995). A mode of construal can be seen as an implicit set of expectations about likely patterns in a domain. Some (e.g., Atran, 1996) argue that we have an innate folk biology. If so, it won't innately include concepts of dogs, pineapples, and wallabies. But it might include biases to assume that there are kinds of things that have essences and vital forces (these assumptions needn't be strictly correct to be useful in learning), that are arranged in rich taxonomies, that can grow, that can be understood in teleological terms, and for which colors tend to play important causal roles (as opposed to most artifacts).
We can't possibly predict which animal, plant, or functional notions in biology will emerge from such a construal alone, but we can predict what would be a non-natural biological concept. The biological mode of construal provides a skeleton within which only a subset of all concepts about living things are natural. This is a nativist view of biological concepts even if no concept in particular is predestined. It is nativist because the constraints that give rise to a particular class of biological concepts would be very different from those that give rise to number concepts or tool concepts, and because those constraints hang together in a coherent way that makes learning about the living world especially easy. We come prewired "knowing," or perhaps "cognizing," something about the living world, something that can be understood as a set of constraints that come into play in learning about the living world and perhaps nowhere else. They might come into play in very different ways. Perhaps certain perceptual triggers tell a particular set of constraints to take over, or perhaps several sets of constrained learning systems each compete to process a particular kind of information and one wins by dint of making more progress because its constraints make that information more tractable. Beyond those details, however, there do seem to be ways to think about innate domain specificity for a great many of our learned concepts, and it is surprising that Cowie does not attempt to find a version of nativism that could work for most concepts.
Ultimately, one major contribution of this book is that, in addition to helping us see more clearly the conceptual issues concerning nativism, it helps us go further down the road towards understanding how we can resolve the debates through observations and experiments on creatures that think and learn. Contrary to claims that the nativism/empiricism contrast is empty, Cowie powerfully supports the view that there are real issues out there for which real answers can be discovered. The final story may be a good bit messier than we think. There might be large, loosely constrained domains, and small, tight, automatic ones linked to such things as phobias. There may be cases that seem sort of dumb in terms of psychological mechanism and sort of low level in terms of where the domain specificity occurs, and both nativists and empiricists may shrug off such cases as not decisive; but there will be many others where the data will come down very clearly, and we will all be grateful to Cowie for helping us see why.
References
Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999). "Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli." Cerebral Cortex, 9, 415-430.
Armstrong, S., Gleitman, L., & Gleitman, H. (1983). "What some concepts might not be." Cognition, 13, 263-308.
Atran, S. (1996). "Knowledge of living kinds." In D. Sperber, D. Premack, & A. J. Premack (Eds.), Causal cognition: A multidisciplinary debate (pp. 205-233). Oxford: Oxford University Press.
Cosmides, L., & Tooby, J. (1994). "Origins of domain specificity: The evolution of functional organization." In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press.
Devlin, J. T., Gonnerman, L. M., Andersen, E. S., & Seidenberg, M. S. (1998). "Category-specific semantic deficits in focal and widespread brain damage: A computational account." Journal of Cognitive Neuroscience, 10, 77-94.
Farah, M. J., & McClelland, J. L. (1991). "A computational model of semantic memory impairment: Modality specificity and emergent category specificity." Journal of Experimental Psychology: General, 120, 339-357.
Fodor, J. A. (1981). "The current status of the innateness controversy." In Representations: Philosophical essays on the foundations of cognitive science. Cambridge, MA: MIT Press.
Fodor, J. A. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
Frith, U. (1999). "Autism." In R. A. Wilson & F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences (pp. 58-59). Cambridge, MA: MIT Press.
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). "Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects." Nature Neuroscience, 2, 568-573.
Johnson, M. H., & Morton, J. (1991). Biology and cognitive development: The case of face recognition. Cambridge, MA: Blackwell.
Kanwisher, N. (1998). "The modular structure of human visual recognition: Evidence from functional imaging." In M. Sabourin, F. Craik, et al. (Eds.), Advances in psychological science, Vol. 2: Biological and cognitive aspects (pp. 199-213). Hove, England: Psychology Press/Erlbaum (UK) Taylor & Francis.
Keil, F. (1981). "Constraints on knowledge and cognitive development." Psychological Review, 88, 197-227.
Keil, F. (1989). Concepts, kinds and cognitive development. Cambridge, MA: MIT Press.
Keil, F. (1995). "The growth of causal understandings of natural kinds." In D. Sperber, D. Premack, & A. J. Premack (Eds.), Causal cognition: A multidisciplinary debate. Oxford: Oxford University Press.
Keil, F., Smith, C., Simons, D., & Levin, D. (1998). "Two dogmas of conceptual empiricism." Cognition, 65, 103-135.
Leslie, A. M. (1995). "A theory of agency." In D. Sperber, D. Premack, & A. J. Premack (Eds.), Causal cognition: A multi-disciplinary debate. Oxford: Oxford University Press.
Marcus, G. F. (in press). "Rethinking eliminative connectionism." Cognitive Psychology.
Murphy, G. L., & Medin, D. L. (1985). "The role of theories in conceptual coherence." Psychological Review, 92, 289-316.
Nakayama, K., He, Z. J., & Shimojo, S. (1995). "Visual surface representation: A critical link between lower-level and higher-level vision." In S. M. Kosslyn & D. N. Osherson (Eds.), Visual cognition: An invitation to cognitive science, Vol. 2 (2nd ed., pp. 1-70). Cambridge, MA: MIT Press.
Osherson, D. N., & Wasow, T. (1976). "Task-specificity and species-specificity in the study of language: A methodological note." Cognition, 4, 203-214.
Sloman, S. A. (1996). "The empirical case for two systems of reasoning." Psychological Bulletin, 119, 3-22.
* Preparation of this paper was supported by NIH Grant R01-HD23922. I am thankful to Kristi Lockhart for many helpful comments on an earlier draft of this paper.