The Space of Reasons vs. the Space of Inference: Reply to Noë

by Susan L. Hurley

Department of Philosophy
University of Warwick (UK)

I will reply to Noë’s insightful discussion under five main headings. The bottom line is that I agree with most of what he says, with one important proviso: my agreement turns on the view that acting for reasons does not require full-fledged conceptual and inferential abilities. I appeal to practical reasons in particular to argue that the space of reasons is not coextensive with the space of inference. This point aside, the distance between our views is minimal. I therefore devote most of my response to this point, developed in the first two sections below. I will begin with a relatively self-contained account of why I hold this view, which develops further my position in Consciousness in Action, and will then draw on this account in my responses to Noë.

1. Having reasons does not require full-fledged conceptual abilities. The motivation for holding this derives from practical reason, not epistemology.

Does the having of reasons require conceptual abilities? It may seem to if we focus on perception and belief and theoretical reasons, as opposed to intention and action and practical reasons.1

Suppose we grant that nonconceptual content is not needed to provide an epistemological grounding for perceptual and theoretical beliefs, and that indeed it could not do this work in any case. This will only seem decisive on the issue of whether reasons require conceptual contents if we have already overintellectualized the mind by giving epistemology priority over practical reason. If there is a case for giving either priority, reasons for action are primary and reasons for belief derivative. Even if reasons for belief must be conceptual, it would not follow that reasons for action must be, since reasons for action are not reasons for belief about what should be done (Hurley 1989, ch. 7-9). Practical rationality is not theoretical or inferential rationality with practical content.

The most powerful motivation for admitting that reasons need not be conceptualized derives from practical reason rather than epistemology. An intentional agent who lacks conceptual abilities and does not conceptualize her reasons can still act for reasons that are her own, from her point of view.2 Reasons for action can be context bound and lack conceptual generality. There can be islands of practical rationality. This possibility becomes clear when contact is made with empirical work.

If someone’s states have conceptual content, he must have conceptual abilities.3 If information that an object has a property is conceptualized, it has a structure that enables the subject to decompose and recombine its elements promiscuously in other contexts, and to generalize and make quantificationally-structured inferences that depend on such context-free decompositional structure. His reasoning abilities are governed by correspondingly rich normative constraints, and are not context-bound but extend systematically to states of affairs removed from his immediate environment and needs. I am inclined to agree with McDowell (1994) that conceptual abilities in this sense come with language (though I would allow that both may come by degrees; see below).

Intentional agency--something that many animals have and plants lack--makes normative space between a mere stimulus-response system and conceptual abilities (Hurley 1998). A creature that acts intentionally acts for reasons. Relations between stimuli and responses are not invariant. Rather, actions depend holistically on normatively constrained relationships between motor intentions and perceptions, between ends and means. A given intention will yield different actions given different perceptions, and vice versa. Actions can be understood as mistaken or inconsistent or instrumentally irrational. Means and end can decouple: an intentional agent can try, err, and try again, can try various different means to achieve the same end. These features of intentional agency make for a minimal kind of recombinant structure: an intentional agent has the ability to combine a given intention with different perceptions, given ends with different means. This is not merely a complex pattern of dispositions; it essentially involves normative constraints (Hurley 1989). The holism and normativity here invoked are of a kind familiar from the writings of Davidson, Dennett and others, though applied to perceptions and motor intentions rather than beliefs and desires, and detached from requirements that the creature have conceptual abilities or itself be an interpreter. Such holism and normativity characterize the personal or animal level, at which it is correct to regard an agent as acting for reasons that are its own, from its own point of view.4

The relatively weak structure and normativity of intentional agency contrasts with the richer structure and normativity of conceptual abilities. An intentional agent has a point of view from which reasons for action register, and she can act for such reasons. Acting for a reason, rather than merely in the presence of or in agreement with a reason, requires the reason to cause the action ‘in the right sort of way’. But this does not require the reflective, context-free, inferentially promiscuous understanding of the reason that goes with conceptual abilities. Reasons can be available to an agent from her point of view even though they are bound to particular contexts and do not generalize. An agent may perceive and act for a reason in a particular context without propositional premises and conclusions being available to her: her own reasons for action may be externalistically constituted. Again, having a reason for action is not a matter of inferring a belief about what should be done. The agent may not generalize or theorize about such a reason; her intentional agency may be expressed in context-bound islands of practical rationality. Yet she may be aware of why she should act a certain way, and of being wrong if she does otherwise, in that context. And she may also be quite capable of doing otherwise, for example, given some different background intention.

These general points apply to what I call perspectival self-consciousness (Hurley 1998; see and cf. Van Gulick 1988; Bermúdez 1998). Part of what it is to be in conscious states, including perceptual states, is to have a unified point of view, from which what you perceive depends systematically on what you do and vice versa, and such that you keep track, at the personal level, of this interdependence of perception and action. Such perspectival self-consciousness essentially involves ordinary motor agency as well as perception. When I intentionally turn my head to the right, I expect the stationary object in front of me to swing toward the left of my visual field. If I intentionally turn my head and the object remains in the same place in my visual field, I perceive the object as moving. If my eye muscles are paralyzed and I try to move them but fail, the world around me, surprisingly, appears to move.

Such perspectival self-consciousness can but need not be conceptual or inferential. As an animal moves through its environment, its intentional motor actions dynamically control its perceptual experience in the face of exogenous environmental disturbances, simultaneously with its perceptions providing reasons for action. It can keep track of contingencies between its perceptions and motor intentions, in a practically if not theoretically rational way. In doing so it can use information about itself and its environment intelligently, to meet its needs.

Such a perspective is correctly described at the animal rather than the subanimal level. But it doesn't follow that the animal has a general concept of itself or its conscious states, or the ability to reason theoretically or systematically about aspects of self and environment in a variety of ways detached from its needs. Its perspectival uses of information about itself may be context bound. Its perspectival self-consciousness can be externalistically constituted.

Can we find more concrete illustrations of intentional agency without conceptual abilities? Here are some cases worth considering in this light.

(1) Symbolic liberation. Sarah Boysen’s chimp Sheba displays an island of instrumental rationality that does not generalize. Sheba was allowed to indicate either of two dishes of jellybeans, one containing more than the other. The rule was: the jellybeans in whichever dish Sheba indicated went to another chimp, and Sheba got the jellybeans in the other dish. Sheba always chose the dish containing more jellybeans, even though this resulted in her getting fewer. Despite her apparent frustration, she seemed unable to indicate the smaller amount in order to get the larger amount. Boysen next substituted numerals in the dishes for actual jellybeans. She had previously taught Sheba to recognize and use the numerals ‘1’ through ‘4’. Immediately, Sheba began to choose the smaller numeral, thereby acquiring the correspondingly larger number of jellybeans for herself. The substitution of numerals seemed at once to free her to act in an instrumentally rational way, as she had been unable to when faced directly by the jellybeans. When the numerals were again replaced by jellybeans, Sheba reverted to choosing the larger number.

(2) The contexts that bind: social relations, detection of cheating, competition vs. cooperation. Tomasello suggests that nonhuman primates have a special ability to understand the social relations of conspecifics that hold among third parties, such as the mother/child relation. They are also unusual in their ability to learn relations among objects: for example, to choose a pair of objects that display the same relation as a sample pair. However, mastering relations among objects is a difficult task for nonhuman primates, taking hundreds or thousands of trials, whereas understanding of third party relations among conspecifics is seemingly effortless. Their skill with relations fails to generalize smoothly from the social to the nonsocial domain.

Cosmides’ work on the Wason effect suggests that even for human primates certain inferential skills are bound to certain social contexts and fail to generalize. Wason asked people to test a simple instance of "p implies q": if a card has "D" on one side, it has "3" on the other side. Subjects observed 4 cards, showing on their upturned sides: D, F, 3, 7. They were asked which cards they should turn over to determine whether the rule was correct. The right answer is: the D card and the 7 card. Most people (90-95%, including those trained in logic) choose either just the D card or the D card and the 3 card.
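The logic of the correct answer can be made vivid with a small sketch (my illustration, not part of the experimental apparatus): a card needs turning only if it could exhibit p together with not-q, i.e. only if it could falsify the conditional.

```python
# The rule under test: "if a card shows D on one side, it shows 3 on the other".
# A card is worth turning over only if its hidden face could falsify the rule.

def must_turn(visible):
    """Return True if the card showing this face could falsify the rule."""
    if visible == "D":
        # p-card: the hidden face might fail to be 3
        return True
    if visible.isdigit() and visible != "3":
        # not-q card: the hidden face might be D
        return True
    # The F card and the 3 card cannot falsify the rule, whatever is hidden
    return False

cards = ["D", "F", "3", "7"]
print([c for c in cards if must_turn(c)])  # → ['D', '7']
```

Turning the 3 card, the popular wrong choice, can only confirm, never refute: the rule says nothing about what lies behind a 3.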

But Cosmides shows that people do get the right answer when they are asked to test instances of "p implies q" that describe an exchange of the form: if you take a benefit, you must meet a requirement. People are very good at detecting cheaters; they can readily perceive reasons to act so as to flush out cheaters. But their reasons are highly context-dependent, and do not generalize, even to other social contexts. People do not get the right answer even for: if you meet a requirement, you get a benefit. When an agent acts on her perceptions so as to flush out a cheater, she can be acting on her own reasons, available from her point of view, even though they are not inferentially promiscuous. She may have conceptual abilities, but not use them.

An interesting though speculative recent twist on Cosmides’ demonstration of the context-bound modularity of reasons is suggested by early results of nonverbal false belief tests given to chimps and dolphins. These nonverbal false belief tests are being developed and applied by Josep Call, Michael Tomasello, and colleagues, in a series of recent articles and work in press and in progress. The interpretation I consider below of this work is highly speculative and cannot be attributed to the researchers as their considered view; they are still reserving judgement, with proper scientific caution. The interpretation I consider is merely one possible interpretation of early results, and is subject to further empirical work. But it does serve to illustrate a relevant possibility, for present purposes.

One version of a nonverbal mind-reading test uses a hider/communicator paradigm, and has been applied to children and to chimps (Call et al, 1999, 2000; work is in progress with bottlenose dolphins). In this paradigm, the subject perceives two opaque boxes, which are then hidden from her view. The hider then proceeds to hide a reward in one of the two boxes, while the communicator watches. The subject cannot see which box the hider puts the reward in. But the subject can see that the communicator can see which box the hider puts the reward in. The barrier is removed, and the subject is then allowed to choose between the two opaque boxes, still unable to see which contains the reward. The communicator truthfully indicates to the subject which box the reward is in. The subject learns to choose the box the communicator has indicated in order to obtain the reward.

In the critical trials, the procedure is altered as follows. After the hider has put the reward in one of the boxes, the communicator leaves the scene. The barrier is removed so that the subject can see the boxes, though not which box contains the reward. While the subject watches, the hider switches the positions of the boxes. The communicator then returns, and indicates the box not containing the reward to the subject, since this box is now in the position of the box into which the communicator saw the reward placed. The subject can choose either the box indicated by the communicator, or the other box. The correct response is to choose the other box, since the communicator did not see that the boxes were switched and thus has a false belief about which contains the reward.

When this nonverbal test of mind-reading ability is applied to children of varying ages, the results are strongly correlated with the results of verbal false belief tests. In general, children under 4 make the wrong response, and select the box the communicator indicates in the false belief trials as well as the control trials. Children over 5 make the correct response--they select the box not indicated by the communicator when the boxes are switched, though they select the box indicated by the communicator in the control trials. Chimps fail profoundly in the false belief trials.

If this test is accepted as an indication of the ability to reason about the mental states of others, chimps appear to lack such ability. However, these results contrast with results for chimps of a different non-verbal mind-reading test, suggesting that the ability to reason about the mental states of others may be context-dependent.

This second version of a nonverbal mind-reading test uses a dominant/subordinate paradigm (Hare et al 2000, in press). The dominant and subordinate chimp compete for food. In some conditions the dominants had not seen the food hidden, or food they had seen hidden was moved to a different location when they were not watching (whereas in control conditions they saw the food being hidden or moved). At the same time, subordinates always saw the entire baiting procedure and could monitor the visual access of their dominant competitor as well. The results are that subordinates more often approach and obtain the food that dominants have not seen hidden or moved. Similarly, the subordinate gets more food when a new dominant chimp is substituted for the one who saw the baiting. This suggests a kind of mind-reading ability on the part of chimps: that subordinates are rationally sensitive to what dominants did or did not see during baiting.

How should the apparent ability of chimps to reason about the mental states of others in the dominant/subordinate paradigm be reconciled with the apparent lack of this ability in the hider/communicator paradigm? It is too soon to say with any confidence. An empirical speculation of interest here, however, is that the hider/communicator paradigm provides a context in which there is cooperation over finding food, while the dominant/subordinate paradigm provides a context in which there is competition over finding food.5 It may be natural for chimps to compete over food in such a way that their ability to reason about the mental states of others is tuned to competitive practical contexts rather than cooperative ones. This provides another possible illustration of how practical reasons might be context-bound, and fail to have full conceptual generality.

Reasons for action in such cases are not ‘sub-animal’ level phenomena, but can properly be attributed to the intentional agents in question--even if they lack, to greater or lesser degrees, inferential and conceptual abilities. Why? For the old familiar reasons: holism and normativity. Perceptual information leads to no invariant response, but explains action only in the context set by intentions and the constraints of at least primitive forms of practical rationality. Perceptions and motor intentions combine to make certain actions reasonable and appropriate from the animal’s point of view, and mistakes are possible.

The motivation such cases provide for admitting nonconceptual intentional agency is not epistemic, but rather to characterize the practical abilities and points of view of these creatures correctly, as neither too rich nor too impoverished: they can act for reasons while doing very little in the way of reasoning. The normativity of nonconceptual intentional agency plays no role in an epistemological project; animals who display islands of instrumental rationality are not in the business of justifying their beliefs. But the reasons for which they act are nonetheless their own reasons, from their own point of view. Of course, they may not be conceptualized by the animal as reasons--but to require that would be to beg the question at issue.

The issue can be restated as follows. It may seem plausible to give the having of reasons links in two directions, which pull against one another (see e.g. Brewer (1999), 49, 54, 56, 77, 82, 150-52). First, having reasons can be linked with the agent’s point of view: reasons make whatever they are reasons for appropriate from the viewpoint of the agent in question. Second, having reasons can be linked with making general inferences from propositional premises to propositional conclusions, hence with conceptual abilities.

One way of making these links explicit is to claim that having a point of view requires having reasons, and that having reasons requires having inferential and conceptual abilities. But I have urged that having a point of view does not require having inferential and conceptual abilities--at least if the notion of a point of view has intuitive empirical application and is not wholly a theory-driven philosopher’s tool. There is a sense of "having reasons", which relates inter alia to acting for reasons, in which it is plausible that having a point of view requires having reasons: requires that the point of view can be described in normatively constrained, personal-or-animal level terms, including essentially in terms of action for reasons. There may well be another richer or more internalistic sense of "having reasons" that entails inferential and conceptual abilities. But this is not quite the same sense of "having reasons", since having a point of view does not entail inferential and conceptual abilities.

A different way of making the links explicit may be suggested. Perhaps having reasons requires having reasons from a point of view, hence having a point of view (even though having a point of view does not require having reasons). And perhaps having reasons also requires having inferential and conceptual abilities, in the same rich or internalistic sense of "having reasons". The problem then is that there is another sense of "having reasons" which still does not require inferential and conceptual abilities: the very sense, relating to reasons for action, that is plausibly required merely by having a point of view.

Noë urges that to the extent we view an animal as subject to constraints of normativity and holism, as flexibly responsive to its environment in ways constrained by intentions and primitive practical rationality, then to that extent we must admit it possesses, at least to some degree, conceptual and inferential capacities. In highlighting the ways in which islands of practical rationality can be context bound, I have tried to show how practical reasons need not display the context-freedom and full generality of conceptualized reasons. Noë may reply that this strategy presupposes too exalted a conception of our own conceptual skills, which are themselves strikingly context-bound.

In a sense, I can agree with this reply. I was employing a theory-driven conception of conceptual abilities, as necessarily involving context-free reasoning skills or inferential promiscuity. Relative to that familiar conception, I could say that we human beings ourselves do not provide very good exemplars of such conceptual abilities. But perhaps this very conception of conceptual abilities is at fault, and should be replaced with one that is driven instead by attention to the character of the abilities that we paradigmatic concept-users actually have. This would probably yield a less demanding conception of conceptual abilities as themselves context-dependent. The question then becomes: what exactly is required for conceptual abilities? Is language required, and if so, why? Is anything more required than intentional agency, which already requires holism and normativity in the way explained above?

2. Conceptual abilities are a matter of degree.

At several points Noë suggests that conceptual abilities are a matter of degree, and that there is no sharp division between the conceptual and the nonconceptual. Animals may not lack conceptual skills altogether, he urges, may have at least rudimentary conceptual skills. I think this may indeed be the right thing to say.

We can allow that conceptual abilities are a matter of degree, while sticking to the familiar conception of conceptual abilities in terms of context-free generality. The point then is that context-free generality is itself a matter of degree. As I suggested above, the holism and normativity required for mere intentional agency already involve some degree of such generality. Tomasello (1999) argues that an intentional agent is one for whom means and ends decompose and recombine, at least in certain contexts. A creature can be such an agent without being able to understand others as intentional agents, for whom means and ends similarly decompose and recombine. Tomasello suggests that chimps are such agents.

Here is a way of motivating this view of conceptual abilities as admitting of degrees.6 On the one hand, acting for a reason of course does not require the ability to infer everything that follows from it. Not even we human beings can do that, and we are paradigmatic possessors of conceptual abilities and havers of conceptualized reasons. On the other hand, can a creature act for a reason in a particular case, without being able to infer anything that follows from it? At least, the kinds of illustrations offered above would not support this claim. The agents there have some, if rather limited, inferential powers. The middle ground is thus attractive, which holds that acting for a reason requires some degree of ability to generalize one’s reason and inferential promiscuity, even if it is contained within certain contexts (such as the detection of cheaters, or competition for food).

On this view, reasons for action go along with having a point of view, which does not require linguistic abilities or the full-fledged context-free theoretical, inferential, and conceptual abilities that language makes possible. If the kind of minimally structured means/end practical reasoning available to nonlinguistic creatures counts as manifesting rudimentary conceptual abilities, then it’s arguable that the criteria for having a person or animal level point of view at all, holism and normativity, are also criteria for having some degree of conceptual ability. This way of seeing conceptual abilities as a matter of degree contrasts with much of the discussion of conceptual vs. nonconceptual content, which gives the distinction an all or nothing character. One advantage of focussing as I have on conceptual abilities instead of conceptual content is that this lends itself to displaying differences of degree by reference to empirical examples.

Even so, I doubt that acting for a reason is most fundamentally a matter of inference, as opposed to a matter of being normatively constrained. Formal patterns of inference are not sufficient to provide reasons for action; exogenous substantive constraints on the interpretation of such patterns are required (Hurley 1989). We need not choose between a conception of reasons and justification as necessarily inferential, on the one hand, and the absence of reasons that are reasons for the agent, on the other: from Wittgenstein, among others, we learn that this neo-Cartesian vs. behaviorist dichotomy is spurious. The space of reasons is not coextensive with the space of inferences--mental ‘acts’--but rather with the space of intentional actions at large, essentially embedded in a natural order that can provide normative constraints directly.

3. Perspectival self-consciousness is an essential aspect of the unity of consciousness and so of being in conscious states, including conscious perceptual states.

I say (1998, 140) part of what it is to be in conscious states is to have a unified perspective, and that this in turn involves perspectival self-consciousness. The conscious states in question include perceptual states. Thus I certainly agree with Noë that perspectival self-consciousness is an essential aspect of perception, rather than a precondition of it. It’s not clear to me whether he thinks I might disagree on this point, or if so, why.

4. Nonconceptual ‘scenario’ content is indeed "phenomenologically wrong-headed and empirically ungrounded".

I have no sympathy with the idea of nonconceptual scenario content, which I agree is phenomenologically wrong-headed and empirically ungrounded. I would add that it is marshalled to play a role in an epistemological project with which I also have little sympathy. My own reasons for thinking that perspectival self-consciousness does not require conceptual abilities owe nothing to these dubious enterprises, and my position should be clearly distanced from them. My views, by contrast, derive from the ways in which practical rationality can come apart from full-fledged conceptual skills, as explained above.

I agree with Noë that change blindness and inattentional blindness do not challenge naive phenomenology, which has no commitment to anything like scenario content. We do naively take ourselves to be able to see the detail in the world, by actively looking around as needed. Moreover, we are right about this, and this truth is not challenged by change blindness. On the other hand, we do not naively take ourselves to have an instantaneous internal model of all the detail we can access by looking around at the world; whether we have such an internal model is not something on which phenomenology (as opposed to various theoretical commitments) pronounces. The fact that many people are surprised by change-blindness does not support this attribution unless other possible explanations for such surprise are ruled out.

However, Noë goes on to claim that the activity that gives us visual access to the world includes attention, which is a kind of conceptualizing activity. I agree that conceptualizing activity can give us visual access to the world: for example, when I attend to the duck rather than the rabbit. However, I don’t accept that activity that provides creatures with visual access to the world must include conceptualizing activity. Intentional motor activity will do. Motor intentions are very closely tied to attention, of course. While I have no account to offer of attention, it is plausible to assume that creatures with rich motor skills but relatively lacking in conceptual skills can nevertheless visually attend to this or that. That is, visual attention can be associated with motor activity as well as with conceptualizing activity. So I don’t think that appealing to the active character of seeing or to attention shows that seeing requires full-fledged conceptual abilities.

5. We can aim to relate the personal and subpersonal levels without assuming isomorphism between them or making vehicle/content projections.

Noë advocates a holism that he takes to be more thoroughgoing than my own, which integrates perception at the personal or animal level with broader capacities for 1) intentional action and 2) thought. On the integration of perception with intentional action, I am puzzled that Noë regards his view as going further than mine in Consciousness in Action. I there argue at length and in detail for the deeply activity-dependent character of perceptual experience, and indeed for the co-constitution of perception and intentional action. It’s not clear to me how his view goes further than mine toward integrating perception and action.

He may well wish to go further than I toward integrating the perception/action system and thought, however. Again, my resistance here is based on consideration of creatures relatively lacking in conceptual and inferential skills and in the capacity for thought, but who are nevertheless perceptual subjects and intentional agents, who can act for reasons delivered by perception. However, it may be that the difference between us here evaporates with my endorsement of the view that conceptual abilities are a matter of degree, that there is no sharp division between the conceptual and nonconceptual.

As for whether there is similarly no sharp line between the personal and the subpersonal: this is a distinction of levels of description. Ultimately it turns on the normativity characteristic of the personal level of description: the contents of mental states described at the personal level are normatively constrained and related, both formally and substantively. Normativity does not require consciousness; there can be relations of inconsistency, for example, between the contents of unconscious beliefs or desires. So the personal level should not be identified with consciousness.

In Consciousness in Action I argue for an essentially two-level view of the unity of consciousness and the interdependence of perception and action. I reject the assumption that there must be isomorphism between the personal and subpersonal levels of description, and the accompanying tendency to project properties and structure from subpersonal vehicles of content into content and vice versa. Nevertheless, I do not regard this distinction as a mysterious gulf. I argue that we can, without making objectionable vehicle/content projections, say something about the relations between subpersonal vehicles of content and personal-level contents of mental states (p. 445). Dynamic singularity is a subpersonal level conception, but I argue that the unity of consciousness has essential personal level, normative aspects as well as a subpersonal aspect I describe along the lines of dynamic singularity. Similarly, I argue that the contents of perception and of intentional action at the personal level are interdependent, and that this can be understood in terms of the way they co-depend on a subpersonal dynamic singularity. In this sense, I argue that the dynamic singularity conception has implications at the personal level. But it is itself described at the subpersonal level, in nonnormative, causal terms.



1. For example, in a recent book (1999), Brewer has argued that reasons must be conceptual. The basic argument has two steps. First, giving reasons requires identifying propositions as premises and conclusions of the relevant inferences. Second, for reasons to be the subject's own reasons, at the personal level and from his point of view, they must consist in some mental state of his that is directly related to the propositional premise of the relevant inference: the premise proposition must be the content of the mental state in a sense that requires the subject to have all the constituent concepts of the proposition. Otherwise, the mental state will not be the subject's own reason (Brewer 1999, 150-152). However, Brewer's discussion is one-sidedly oriented toward perception and belief, as opposed to intention and action. He typically speaks of reasons for judgements or beliefs, adding parenthetically "(or action)" to keep practical reasons in play (e.g. 150, 151, 168). These gestures toward action don't do the work needed. See my discussion in Hurley (forthcoming), on which the present discussion draws heavily.

2. I speak in terms of conceptual abilities, which are less abstract and contentious and more operational than conceptual content, assuming that whatever conceptual content is, it requires conceptual abilities.

3. Cf. MacIntyre (1999).

4. The personal level is here understood as the locus of normative/rational constraints, not in terms of consciousness. In Freudian examples, or cases of self-deception, the partitioning of an agent into subsystems may be driven by normative constraints of consistency. Even if some such subsystems are unconscious, they would still count as at the personal level. The subpersonal level is understood as the level of causal/functional description at which talk of normative constraints and reasons no longer applies.

5. This was raised as one possibility by Josep Call, in discussion.

6. Thanks here to Ram Neta.


References

Bacharach, Michael, and Hurley, Susan (1991), Foundations of Decision Theory (Oxford: Blackwell).

Bermúdez, José (1998), The Paradox of Self-Consciousness (Cambridge: MIT).

Boysen, Sally, and Berntson, G. (1995), "Responses to Quantity: Perceptual vs. Cognitive Mechanisms in Chimpanzees (Pan troglodytes)", Journal of Experimental Psychology: Animal Behavior Processes 21, 82-86.

Boysen, Sally, Berntson, G., Hannan, M., and Cacioppo, J. (1996), "Quantity-based Inference and Symbolic Representation in Chimpanzees (Pan troglodytes)", Journal of Experimental Psychology: Animal Behavior Processes 22, 76-86.

Brewer, Bill (1999), Perception and Reason (Oxford University Press).

Call, Josep, and Tomasello, Michael (1999), "A Nonverbal Theory of Mind Test: The Performance of Children and Apes", Child Development 70, 381-395.

Call, J., Agnetta, B., and Tomasello, M. (2000), "Social Cues that Chimpanzees do and do not use to find Hidden Objects", Animal Cognition 3, 23-34.

Cosmides, L. (1989), "The Logic of Social Exchange: Has Natural Selection Shaped how Humans Reason? Studies with the Wason Selection Task", Cognition 31, 187-276.

Hare, B., Call, J., Agnetta, B., and Tomasello, M., (2000), "Chimpanzees know what conspecifics do and do not see", Animal Behaviour 59, 771-785.

Hare, B., Call, J., and Tomasello, M. (in press), "Do Chimpanzees Know what Conspecifics Know and do not Know?", Animal Behaviour.

Hurley, Susan (1989), Natural Reasons (New York: Oxford).

Hurley, Susan (1998), Consciousness in Action (Cambridge: Harvard).

Hurley, Susan (forthcoming), "Overintellectualizing the Mind", Philosophy and Phenomenological Research, 2001.

MacIntyre, Alasdair (1999), Dependent Rational Animals: Why Human Beings Need the Virtues (London: Duckworth).

McDowell, John (1994), Mind and World (Cambridge: Harvard).

Tomasello, Michael (1999), The Cultural Origins of Human Cognition (Cambridge: Harvard).

Van Gulick, Robert (1988), "A Functionalist Plea for Self-Consciousness", Philosophical Review XCVII/2, 149-181.

Wason, P. (1966), "Reasoning", in B. Foss, ed., New Horizons in Psychology (London: Penguin).
