Some Awkwardness in Poised Content?

by William Seager

Division of Humanities
University of Toronto at Scarborough
Scarborough, Ontario
M1C 1A4 Canada


Although the problem of consciousness seemed to be something of a side-issue in the development of naturalistic accounts of mind in the mid to late twentieth century, real progress has been made recently. We now possess, thanks to Michael Tye and other philosophers, an impressive variety of sophisticated theories of consciousness. To my mind, the most significant of these are the representational theories of consciousness.

Michael Tye's Consciousness, Color and Content (2000) expands, extends and defends the theory of consciousness which he advanced in Ten Problems of Consciousness (1995). Tye's theory, as a representational theory of consciousness, asserts that consciousness just is the active, or at least potentially active, presence within a cognitive system of a set of representations with certain well-defined properties. This is, in one way or another, common to all representational theories. But for Tye, all of consciousness is a matter of representational content. Thus even qualitative states of consciousness - the taste of coffee, the smell of a rose, the twinge of pain, and so on - are to be understood as representational content, though of a rather special sort. This feature of Tye's view is controversial even within the representational camp, since the 'phenomenality' or qualitative nature of mental states can be regarded as a property of such states in its own right rather than a matter of what or how non-phenomenal features are represented. The prospect of explaining the qualitative aspect of consciousness is the most exciting feature of Tye's representational approach.

Of course, not everyone agrees that the best approach to consciousness is so thoroughly representational (Ned Block has presented several arguments against the representational view; see Block (1990) for example), but I do not want to question the representational theory's basic premise here. Instead, I will raise some issues from within the representational camp.

There are now several versions of the representational theory of consciousness. An important distinction within the field is between 'higher-order' and 'first-order' theories. The former assert that a state, S, is a conscious state if it is the object of a higher-order mental state - a state which represents or is about the original state. Important distinctions can be drawn within the higher-order theories as well. There are higher-order thought theories, in which consciousness is the result of a higher order state which is a thought about the lower order state (see Rosenthal 1986); other theories prefer to regard the higher order state as a kind of perception of the lower order state (see Lycan 1996).

It will also be important for us to distinguish between actualist and dispositionalist versions of these theories, depending on whether a state's being conscious depends on the actual presence of the requisite higher order state or whether a mere disposition to bring about the higher order state is sufficient for the lower order state to be a conscious state (a very well worked out dispositional higher order thought theory of consciousness has been developed by Carruthers (2000)).

First order representational theories of consciousness, of which Tye's theory is an example (see also Dretske 1995), do not require that a lower order state be the object of a higher order state in order to be a conscious state. Since all of the theories at issue here tend to accept pretty standard cognitive science accounts of the mind, there is a huge supply of representational states presumed to be active within any working cognitive system. While higher-order theories have a clear account of what makes a particular representational state conscious, they face a problem about distinguishing mere consciousness from introspective consciousness. On the other hand, first order theories face the non-trivial task of distinguishing those representational states which are states of consciousness from the plethora of representational states that are not. This leads, in Tye's theory, to a constraint upon which representational states are conscious that is somewhat similar to the constraint imposed within higher-order theories.

However, before proceeding with this line of thought, which leads deep into some core issues of representational accounts such as Tye's, I want to digress briefly and introduce a small, and what I hope Tye would regard as a friendly, amendment to his account of conscious pain. It does not strike me as correct that animals cannot 'suffer' pain (see p. 182). This could be regarded as a merely verbal matter if Tye wishes to define suffering as the introspective awareness of pain. But I see nothing to recommend this linguistic legislation. Normally we regard the suffering as what is bad about being in pain, and if animals cannot suffer then there doesn't seem to be anything bad about their being in pain and their pain loses at least a major part of its moral significance. It also seems intuitively compelling, to me at least, that there are pains which are so intense that they can destroy one's ability to introspect without destroying one's consciousness and while preserving the suffering. These pains are themselves so vicious that they obliterate any awareness of them as states of mind, leaving only the searing awfulness - which is certainly a state of consciousness - which they intrinsically possess.

Of course, on a first order representational account of consciousness whatever qualitative feature states of consciousness possess 'intrinsically' is a matter of what they represent. Thus I suggest that the theory recognise the representation of 'evaluative properties' as features of states of consciousness. Evaluative properties are properties of things (that is, intentional objects of states of consciousness), or putative properties of things, which make them good or bad. A pool of water, to a thirsty man, is represented as clear, cool, wet and good-to-drink. There is no inference to this 'primitive attractiveness' of water when thirsty, though there might be an inference that this water is, in truth, not good to drink (because, say, it is suspected to be poisoned). A pain is a representation of a certain part of the body being damaged in a certain way, but also possesses a distinctive negative evaluative representational component which is its painfulness. Suffering can then be defined as being in a state of consciousness with that kind of negative evaluative representational component.

Evaluative properties are complex, since they have a built in relativity to the kind of being representing them and the state of the being doing the representing. Water does not always look good to drink, but when it does there is a distinctive element of consciousness which cannot be neglected. To take another, well-known example, consider the analgesic effect of opiates such as demerol. It is common for these to relieve pain, as we say, without actually obliterating the consciousness of the state of the body which has been damaged. Jeffrey Foss provides a harrowing autobiographical account and enlightening discussion of the biochemistry of the curious analgesic effect of demerol on serious pain in Foss (2000). If the representation of evaluative properties is included in our theory of consciousness, this effect of demerol has a straightforward explanation. Crudely put, demerol works by blocking the representation of the distinctive negatively evaluative property central to pain. After the injection of demerol, one's cognitive machinery is still representing the damage to one's body; one can still introspect the pain, and know (directly) that it is pain, but it has changed, and the change is that the pain is not so bad anymore. Foss describes the phenomenology of this as 'my leg still screamed, but I was no longer inclined to pay any attention ... my intellectual comprehension just did not translate into caring, much less action' (p. 146). In fact, one might speculate that the consciousness of this kind of value is the most basic feature of sensory consciousness, the feature that drove the refinement of the more familiar sensory modalities as a way to facilitate the indirect, behavioural alteration of these fundamental evaluative aspects of consciousness (in a slogan: approach the good, avoid the bad).

I will not try to develop this idea any further here, though I think it is central for the extension of the representational account to more complex forms of consciousness. I do want to note some of the areas in which it is important. It is crucial for the proper treatment of the consciousness of various kinds of value, and, not unrelatedly, for the correct account of emotional consciousness. It is also important for the extension of the theory's account of introspection beyond that of purely sensory states (see Seager 2000). Furthermore, it is I think necessary for the proper account of the difference between states of consciousness which are motivating and those which are not; crudely, it is the representation of evaluative properties which underlies the way perception, and other forms of consciousness, motivate us to actually do something rather than passively observe. Finally, if somewhat speculatively, the acceptance of evaluative properties as features of states of consciousness suggests an interesting and naturalistic approach to ethics or at least 'moral consciousness'. However, the main point I want to make here is simply that the incorporation of evaluative properties is entirely within the spirit of the representational theory of consciousness and relieves representationalists of the unpleasant and implausible claim that animals, and children, cannot suffer even when in obviously excruciating pain.

Now, to return to the main line of argument, recall that I asserted that Tye's theory has a feature surprisingly similar to that of higher-order theories, namely a requirement for a state's being conscious that it bear a certain relation to thoughts. To explain this, note first that Tye's theory appears to be restricted to an account of sensory consciousness, conceived sufficiently broadly to allow bodily sensations to count as sensory states. So, if conscious thinking involves a kind of consciousness which is non-sensory, which intuitively appears to be the case, Tye's theory does not apply to conscious thinking. Thus there is something of a problem generating a unified account of consciousness from Tye's perspective. This difficulty does not arise for higher order theories. It is worth noting, however, that another problem of unification afflicts at least some higher-order thought theories. Some of these (see Rosenthal (1986) for example) regard phenomenality as a feature of mental states in its own right, one that can characterize mental states whether or not they are conscious. Thus there seems to be a kind of disunity in their treatment of phenomenal consciousness versus conscious thought.

In any case, Tye asserts that for a sensory state to be conscious its representational content must be available to more complex cognitive mental states which mediate between the sensory content and behaviour. We might call such states 'higher level' to distinguish them from 'higher order' states which have lower order mental states as their intentional objects. Beliefs and desires would be the most natural examples of such higher level mental states but Tye is clear that cognitive states that perhaps do not fully merit the status of beliefs and desires will suffice to underwrite states of sensory consciousness. Thus certain animals which we may doubt have full-fledged beliefs and desires can still enjoy states of sensory consciousness insofar as they use the content of their sensory representations to facilitate at least somewhat intelligent, unfixed, non-automatic and learned behaviour. Tye goes so far as to include, for example, honeybees in this category.

Obviously, if one makes the demand that sensory states interact with higher level states in order to be conscious, then one can espouse either an actualist or dispositionalist version of the demand. Tye opts for the dispositionalist account. One of the conditions of the PANIC theory is that the content of sensory states must be 'poised' or available to the higher level cognitive states (recall that the other conditions are that the content must be abstract, non-conceptual and intentional). Tye says that such states 'stand ready and available to make a direct difference to beliefs and desires' (p. 172). In Tye (1995) the discussion of poised content is even more explicitly dispositionalist; there Tye says that 'to claim that the contents ... must be poised is to be understood as requiring that these contents ... stand ready and in position to make a direct impact on the belief/desire system. To say that the contents stand ready in this way is not to say that they always do have such an impact' (1995, p. 138). Poised content need not actually create or modify any beliefs or desires in order to be conscious.

Dispositionalist accounts of the conditions of consciousness face an immediate and I think serious, if rather abstract and 'purely philosophical', objection. Take a subject, S1, and consider the set of PANIC states of S1 which are conscious but which in fact have no effect on any higher level cognition. While these states have all sorts of effects on S1 and S1's behaviour, they are conscious solely in virtue of their unexercised dispositions to affect high level cognition. Now, take a second subject, S2, who is identical to S1 save that S2 has been modified by the attachment of a device that would block the relevant disposition - that is, make the content unavailable to higher level cognition, but only for those states which in fact are not going to affect higher level cognition. Note that this device will never have to do anything in S2 (it will only operate, so to speak, in counterfactual situations). There will be absolutely no difference in the neural processes of S1 and S2, nor in their behaviour. The only difference is that S2's brain has within it a totally inert disposition-blocking device. Nonetheless, Tye's theory asserts that S2 will have quite different states of consciousness compared to S1. This seems to me extremely implausible. S1 and S2 will behave exactly the same way and they will have exactly the same neural processes at work within their brains. They will even have exactly the same representational states active within them and active in exactly the same way in each of them. Yet S1 supposedly has many more states of consciousness than S2.

It might be replied that S2's brain is 'abnormal' because of the attached device. This kind of abnormality seems irrelevant. Imagine a similar device affixed to S1's brain which is missing its battery, so it cannot operate. S1's brain is now 'abnormal' in the same way as S2's, since after all the device in S2's brain will as a matter of fact never operate. Surely attaching such an inoperable device to S1 will not alter S1's consciousness in any way at all. Why should attaching an almost exactly similar device - one that could, but in fact never will, operate - make such a huge difference in consciousness?

This objection stems from and supports the intuitively attractive idea that consciousness is an occurrent phenomenon which depends only upon the current state of the subject, and of course there have been attacks on this intuition (for example, Dretske's famous or infamous denial of consciousness to Swampman). To my mind, the intuition seems on a sounder footing than the theories that deny it.

In any case, the dispositional aspect of poised content must be distinguished from introspectibility. There are, to be sure, examples of people who don't 'notice' their own states of consciousness. But this is a point about the difference between consciousness and introspection. First order theories have an easier time explaining the difference between mere consciousness and introspection than do the higher order theories, since the latter explicitly invoke a higher order mental state which is about the lower order, conscious, state and it is tempting to equate such a higher order state with introspective access to the conscious state. Thus higher order theories have to impose a distinction upon the set of higher order states which divides them into the merely 'consciousness creating' and 'introspective access providing' states. First order theories can, so to speak, borrow the machinery of higher order theories as the basis of their theory of introspection without modifying their theory of consciousness itself.

Thus, introspection is the having of thoughts about one's states of consciousness. But mere consciousness itself requires only the sort of interaction with high level cognition adumbrated above. One example much discussed by all these theories is that of the distracted driver. This is the phenomenon, of which the reader very likely has first hand experience, of suddenly discovering that one has seemingly not been paying any attention to driving one's car for a disturbingly long period of time. This phenomenon is subject to a variety of interpretations. But it is pretty clear that one has not been introspectively aware of one's sensory states during the period of distraction (only 'pretty clear' since one could imagine that there has been rapid memory loss of such introspective awareness as one drives, but this seems to me highly unlikely as it tends to imply, or at least suggest, that all consciousness is, or is associated with, introspective consciousness, and that just seems wrong). But are distracted drivers unconscious of the sensory information which must be operating within their cognitive systems? Not according to Tye. The sensory states which represent the configuration of the road ahead as well as the orientation of the steering wheel (and many other relevant facts) seem to be doing their usual job of affecting the (short term and unremembered) beliefs one has about where and how to drive, given the standing desire not to exit the roadway, crash and burn.

So these sensory states are not merely poised to affect belief, they are actively affecting beliefs as the driver negotiates the roadway. Tye goes on to say that the sensory representations provide input to higher level cognitive systems 'whose job it is to produce beliefs (or desires) directly from the appropriate nonconceptual representations, if attention is properly focussed and the appropriate concepts are possessed' (1995, p. 138). Once again, while it is possible to read this as suggesting that attention is to be fixed upon the sensory states, this possibility ought to be resisted, since it equates consciousness with introspection.

It is not easy to think of an example of merely poised sensory content. Perhaps an example that Tye uses in another context will do. This is the example of Mary (p. 14ff.) who is so distracted by her thoughts that she does not notice the rose placed before her (even though she has never so much as seen a colour before). No beliefs or desires are formed on the basis of the sensory representations which, Tye asserts, are brought about by the rose. Although, yet again, Tye's discussion proceeds in terms of the nature of Mary's sensory experiences being available to introspection, let us continue to take it that it is not required for a state to be conscious that it be available for introspective awareness. Otherwise, the distinction between Tye's theory and Carruthers's dispositionalist higher order thought theory of consciousness would seem simply to collapse. Tye's theory can of course allow that conscious sensory content is normally available to introspective awareness in creatures, such as adult human beings, with the rather complex conceptual apparatus necessary for engaging in introspection, but his theory maintains a principled and I think essential distinction between introspective awareness and awareness tout court.

So it is not the connection between sensory states and introspective beliefs that makes sensory states conscious states. We may then ask whether there are other restrictions on the sorts of beliefs necessary for sensory states to be conscious states. It seems that there are. Suppose there is an ANIC state S - a state, that is, which meets Tye's conditions save for that of being poised - such that whenever S occurs I am directly caused to have the (occurrent) belief that the activity level of my insular cortex has increased by 10%, and let it be that S carries nonconceptual content about the level of activity of my insular cortex, so that these beliefs are generally true. But I have no sensation of my insular cortex becoming more active; I just suddenly think that it is more active. This is not a case of phenomenal consciousness (although it is certainly a state of consciousness, an instance of conscious thought). This kind of direct link between sensory representation and belief is not the right sort to generate phenomenal consciousness. Real world examples of something like this imaginary case are not hard to come by. Many experiments have shown that sensory stimuli presented for too short a time for one to be conscious of them nonetheless have cognitive effects (see, for example, Kunst-Wilson and Zajonc (1980) or Murphy and Zajonc (1993)). In the latter experiment Chinese ideographs were presented to non-Chinese-reading subjects who were to decide whether each ideograph represented a 'good' or 'bad' concept. Before presentation of the ideograph a human face was presented for an exceedingly short time of 4 msec. The face was either angry or happy. The expressed affect of the face influenced the subjects' beliefs about the ideograph, but without any consciousness of the faces. One might perhaps object that the subjects weren't forming real beliefs here, but this objection assumes too stringent a condition upon the cognitive states produced by the sensory states. The subjects of these experiments certainly were forming opinions and otherwise engaging high level cognitive function, and obviously there are plenty of conscious experiences that provoke in us only more or less confident opinions about the way things are.

So what are the right beliefs (or cognitive states)? Mere access to the belief/desire system does not by itself transform nonconceptual content into phenomenal consciousness. The obvious answer is that the sensory representations must be apt to produce beliefs about the perceptible qualities which they themselves represent. If I'm looking at a horse it will not be sufficient for phenomenal consciousness that my sensory representation of the horse induce in me the belief that there is a horse before me; what is required is, at minimum, a belief of the form 'a horse that looks like .... is before me'. But what fills in the dots here? No description will do, since I can come to have beliefs about horses meeting such and such descriptions without having any states of phenomenal consciousness. You could simply tell me how the horse in front of me looks and convince me that the description is correct (I could be blindfolded), and I could thereby come to believe that there was a horse in front of me meeting that description without enjoying any visual consciousness. Furthermore, nothing prevents a mechanism which directly instills such description-based beliefs in me when I am in the presence of horses, so the condition that sensory content be a 'direct influence' on one's cognitive system will not solve this problem. The fact that I could acquire this belief 'directly' from my sensory system - as in the imaginary and actual examples above - goes nowhere towards showing that phenomenal consciousness of a horse will attend the creation of the belief.

If you will forgive me, the point can be illustrated by an anecdote of a curious experience I had last summer. I was in an extremely quiet place overlooking the sea, enjoying the stillness and the few faint sounds of wind, rustling branches, birds and so on, when it suddenly struck me - and this is the only way I can describe this uncanny experience - that I was soon going to hear something. I was not conscious of any new sound but had the rather uneasy feeling that a sound was 'coming'. Sure enough, a short while later I could hear the distant droning of a small airplane making its way up the coast. This is an example of what has to be a state with content which is poised, in the sense of being able to 'directly' influence my cognitive system, but which was not a phenomenally conscious state. Perhaps it is possible to deny this, and to assert instead that this was a case of conscious aural experience which was 'merely' unavailable to introspection. This seems very implausible to me. There seems, rather, to be a viable distinction between this kind of pre-conscious deployment of content within our cognitive systems and phenomenal consciousness proper, a distinction which is not recognised by Tye's theory unless we impose a distinction within the field of poised content. Thus there is a serious question about exactly what kind of poised content is 'appropriate' for the generation of consciousness. I would suggest that if this problem is genuine, it is at bottom another instance of the hard problem, or the explanatory gap.

Tye goes to a lot of trouble (ch. 2) in an effort to show that the gap is merely an illusion. His diagnosis is basically that there can be no demand for an explanation of why an identity claim holds; if x = y it is senseless to ask why x is identical to y. But for this to work we need a good candidate for the physical states that are to be identified with states of consciousness. The problem of the last paragraph reveals that poised (+ ANI) content is not this candidate, since only some of this content actually underlies states of consciousness. The explanatory gap surely arises again if we are forced to limit ourselves to the entirely trivial remark that conscious states are identical to those physical states that are identical to conscious states, while having no way of telling what the relevant physical difference is between the conscious and non-conscious candidates for the identity. It always remains possible, I suppose, that we will eventually discover the candidate that maps perfectly onto conscious experience, but even in that case there are constraints on what would count as explaining consciousness. To take a ridiculous case, suppose only PANIC states whose canonical English expression has a prime number of vowels generate states of consciousness. Although we could claim to have - and might really have - discovered the correct identity claim between physical and conscious states, this would hardly bridge the explanatory gap.

So let us return to the question of how to give a non-trivial characterization of the contents of the cognitive states necessary for the occurrence of conscious experience. Given the foregoing, the only content that might seem able to do the job here is the nonconceptual content of the sensory representation itself. One fairly natural proposal would be that only those PANIC states tending to produce beliefs (or other cognitive states) about those very states' contents suffice for conscious experience. But, as we've seen above, this proposal just collapses the difference between Tye's theory and the (dispositional) HOT theory of consciousness. Perhaps that is, in the end, simply what Tye's theory amounts to. This would be of some interest given that Tye and Carruthers seem to think that their theories are quite distinct.

A worse problem looms, though. Beliefs cannot, by their very nature, contain nonconceptual content (although of course they can represent such content). Tye is quite clear on this restriction, for he says that conscious experiences 'arise at the interface of the nonconceptual and conceptual domains' (p. 62). And, as we've seen, merely representing the nonconceptual content will not guarantee phenomenal consciousness. Call this the 'uptake problem': the problem of taking up the nonconceptual content of the sensory representations into the conceptual states of belief. It might seem that Tye introduces the notion of phenomenal concepts to solve this problem. But this does not actually seem to be the case. Phenomenal concepts are introduced as 'the concepts utilized when a person introspects his or her phenomenal state and forms a conception of what it is like for him or her at that time' (p. 25). A conception of what it is like for a subject is a concept deployed in introspection, not a concept deployed in mere conscious awareness of the world around one. One does not need to have phenomenal concepts to have conscious perception. Tye is very liberal about allowing that animals, including some insects, have phenomenal consciousness, but he surely does not wish to grant them any introspective abilities or beliefs about what it is like to be them. Therefore such creatures will not possess any phenomenal concepts but there will nonetheless be something it is like to be one of them - they will enjoy phenomenal consciousness. But why? As we've seen, the fact that their nonconceptual sensory states can affect these creatures' belief/desire system does not entail that they are phenomenally conscious.

A quick and dirty answer to the uptake problem is simply to deny that there is any uptake of nonconceptual content into belief. Rather, what matters is that the linkage between the sensory contents and the belief/desire system is 'of the right kind to generate conscious experience'. One could talk here, though rather vaguely and unsatisfactorily, of there being 'enough' and 'sufficiently intimate' interaction between 'sufficiently complex', ongoing and continually changing sensory contents and similarly changing beliefs and desires. Tye's discussion of blindsight is instructive here. In short, the reason why blindsighters are not phenomenally visually conscious is supposed to be that 'there is no complete, unified representation of the visual field, the content of which is poised to make a direct difference in beliefs' (p. 63). The general question to ask here is: why do completeness and unity matter to phenomenal consciousness? There are neurological syndromes which seem to involve incomplete and disunified visual fields that do not destroy phenomenal consciousness. One example is that of visual neglect, in which the subject is conscious of only half of the visual field. This is a case of an incomplete and disunified visual field without loss of visual phenomenal consciousness. It is easy to reply that in this case there is a complete and unified representation of half the visual field (which shifts around continuously with the attention of the subject). But then why don't blindsighters have complete and unified representations of 'small bits' of their visual fields, and why wouldn't this be sufficient for phenomenal consciousness? There is an obvious worry of circularity lurking here: the definition of the relevant senses of 'unified' and 'complete' can only be given in terms of consciousness itself. The point is that talk of 'complete and unified' representations of visual fields disguises a mystery: what turns some representations into conscious experiences while other representations fail to generate consciousness? This question is, of course, the good old explanatory gap reappearing. Some kinds of poised content are sufficient for phenomenal consciousness but others are not. Why?

The problems discussed thus far centre on doubts about whether poised content is sufficient for phenomenal consciousness. It is also unclear whether poised content is necessary for phenomenal consciousness. The problem here is that the activation of phenomenal concepts would seem to be enough to generate a kind of phenomenal experience in the absence of any PANIC. At least, this is so if it is possible for phenomenal concepts to be active or 'applied' in the absence of what they properly apply to, a possibility which is undeniable for concepts in general, and which may be a constituent feature of the concept of a concept, namely that concepts support an appearance/reality distinction. Normally, phenomenal concepts are applied to the non-conceptual contents of certain representations of either the world or the body. For example, in pain there is a non-conceptual representation of a part of the body as having certain features. Knowing what pain feels like requires the deployment of the phenomenal concept of pain (or the phenomenal concept of pain's feel) in the categorization of these non-conceptual contents. Presumably, Tye would endorse some tale of the neurological realization of concept application, and thus there seems to be nothing preventing the 'misapplication' of phenomenal concepts if this neural realization should occur inappropriately, that is, in the absence of the appropriate non-conceptual content. Of course, such misapplication would be decidedly abnormal, but we can imagine a science-fictional philosophical thought experiment in which the neurological structure of phenomenal concepts is so well understood that a machine can be constructed which can directly 'turn on' the realization of an application of a phenomenal concept, in the complete absence of any appropriate non-conceptual content. Imagine that the machine works such that when we ask someone to introspect the state of their left arm, say, it activates a particular phenomenal concept for extreme pain in the left arm. There is no reason to believe that every mechanism which can activate the neural machinery underlying the application of a phenomenal concept must involve a non-conceptual representation of what that phenomenal concept represents. After all, it is obvious that the signals from our imaginary machine do not normally covary with a state of a damaged left arm. In fact they are supposed to be nothing but a kind of list of which neural structures within the conceptual machinery are to be turned on and which turned off so as to produce the 'neural activation vector' which corresponds to the application of that phenomenal concept, along with a device which actually turns these neural structures off or on.

This is important enough to belabour. According to Tye's theory, in a normal case of introspective knowledge of what pain feels like we have two components: the non-conceptual representation of the pain (the PANIC) and the application of the phenomenal concept of pain which yields knowledge. It is explicitly granted that we can have the former without the latter. It seems equally obvious that it is physically possible that the structures that subserve the application of the phenomenal concept of pain could be activated in the absence of any appropriate PANIC. The imaginary machine we have envisaged brings about this latter state.

But the important question is whether there is anything it is like to be in this peculiar state. The unfortunate victims of our machine will sincerely believe that they are experiencing excruciating agony, and they will be able to describe the 'pain' in detail (since phenomenal concepts have tremendous 'fineness of grain', specificity of modality, etc.). There seems to be a clear sense in which they are indeed feeling pain. In fact, such people would seem to meet Tye's definition of 'suffering' (see above) and it is hard to imagine genuine suffering without some kind of phenomenal consciousness.

So it appears that phenomenal consciousness can exist without there being any poised content and that poised content can exist without there being phenomenal consciousness. Without taking anything away from the interest, fruitfulness and indeed the elegance of Tye's representational theory, there still seems to be a kind of explanatory gap between representation and consciousness.

References

Block, Ned (1990). 'Inverted Earth', in J. Tomberlin (ed.) Philosophical Perspectives, vol. 4, pp. 53-79.

Carruthers, Peter (2000). Phenomenal Consciousness, Cambridge: Cambridge University Press.

Dretske, Fred (1995). Naturalizing the Mind, Cambridge, MA: MIT Press.

Foss, Jeffrey (2000). Science and the Riddle of Consciousness: A Solution, Boston: Kluwer Academic Publishers.

Kunst-Wilson, W. R. & Zajonc, R. B. (1980). 'Affective discrimination of stimuli that cannot be recognized', Science, 207, pp. 557-558.

Lycan, William (1996). Consciousness and Experience, Cambridge, MA: MIT Press.

Murphy, S. T. & Zajonc, R. B. (1993). 'Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures', Journal of Personality and Social Psychology, 64, pp. 723-739.

Rosenthal, David (1986). 'Two Concepts of Consciousness,' Philosophical Studies, 49, pp. 329-59.

Seager, William (2000). 'Introspection and the Elementary Acts of Mind', Dialogue, 39, 1, pp. 53-76.

Tye, Michael (1995). Ten Problems of Consciousness, Cambridge, MA: MIT Press.

Tye, Michael (2000). Consciousness, Color and Content, Cambridge, MA: MIT Press.