Symbol Grounding
Sections of Chalmers’ bibliography relevant to Symbol Grounding (Part IV: Philosophy of Artificial Intelligence):
4.1 Can Machines Think?
4.1a The Turing Test
4.1c The Chinese Room
4.2 Computation and Representation
4.2a Symbols and Symbol Systems
4.6 Computationalism in Cognitive Science

 
Related Topics: Chinese Room argument, cognitive modelling, computationalism, connectionism, hermeneutics, intentionality, referential competence, other minds problem, robotics, situatedness/embeddedness, Turing test.



1.  Stevan Harnad's papers on the symbol grounding problem
 
1990

The symbol grounding problem
Physica D  42: 335-346.
 

1991

Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem
Minds and Machines 1: 43-54.
 

Connecting object to symbol in modeling cognition
In: A. Clark and  R. Lutz (eds.). Connectionism in Context. Springer-Verlag, pp. 75-90.
 
1993
Grounding Symbols in the Analog World with Neural Nets
Think 2(1): 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.). [Also reprinted in French translation as "L'Ancrage des Symboles dans le Monde Analogique à l'aide de Réseaux Neuronaux: un Modèle Hybride." In: Rialle, V. & Payette, D. (eds.). La Modélisation. LEKTON, Vol. IV, No. 2.]
Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component
In: Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum.

Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem
PSYCOLOQUY 4(34) frame-problem.11.
1994
Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life
Artificial Life 1(3): 293-301 (reprinted in: C.G. Langton (ed.). Artificial Life: An Overview. MIT Press, 1995).
  Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't
Special Issue on "What Is Computation" Minds and Machines 4: 379-390. [Also appears in French translation in "Penser l'Esprit: Des Sciences de la Cognition à une Philosophie Cognitive," V. Rialle & D. Fisette (eds.). Presses Universitaires de Grenoble, 1996.]
 
1995
Grounding symbols in sensorimotor categories with neural networks
IEE Colloquium "Grounding Representations: Integration of Sensory Information in Natural Language Processing, Artificial Intelligence and Neural Networks" (Digest No. 1995/103). ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad95.iee.html

Does the Mind Piggy-Back on Robotic and Symbolic Capacity?
In: H. Morowitz (ed.). The Mind, the Brain, and Complex Adaptive Systems. Santa Fe Institute Studies in the Sciences of Complexity. Volume XXII. Pp. 204-220.

Grounding Symbolic Capacity in Robotic Capacity
In: L. Steels and R. Brooks (eds.). The Artificial Life Route to Artificial Intelligence: Building Situated Embodied Agents. Hillsdale (NJ): LEA. Pp. 277-286.

Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding
(with S.J. Hanson and J. Lubin)
In: V. Honavar and L. Uhr (eds.). Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. New York: Academic Press. Pp. 191-206.

1996
The Origin of Words: A Psychophysical Hypothesis
In: W. Durham and B. Velichkovsky (eds.). Communicating Meaning: Evolution and Development of Language. Hillsdale (NJ): LEA.

On the Virtues of Theft Over Honest Toil: Grounding Language and Thought in Sensorimotor Categories
Paper presented at the Hang Seng Centre Conference on Language and Thought, Sheffield University, June 1996. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad96.language.theft.html
1998
The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories  (with A. Cangelosi)
Paper presented at the Second International Conference on the Evolution of Language, London, April 1998. To appear in a volume edited by C. Knight and J. Hurford. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad98.theft.toil.html


2.  Literature on the symbol grounding problem.

Christiansen, M.H. & Chater, N. (1992). Connectionism, meaning and learning. Connection Science  4: 227-252.

There is an apparent anomaly in the notion that connectionism, which is fundamentally a new technology, has considerable philosophical significance. Nonetheless, connectionism has been widely viewed as having implications for symbol grounding, notions of structured representation and compositionality, as well as the issue of nativism. In this paper, the authors consider each of these issues in detail, and find that the current state of connectionism does not warrant the magnitude of many of the philosophical conclusions drawn from it. They argue that connectionist models are no more "grounded" than their classical counterparts. In addition, since connectionist representations typically are ascribed content through semantic interpretation based on correlation, connectionism is prone to a number of well-known philosophical problems facing any kind of correlational semantics. However, the authors suggest that philosophy may be ill-advised to ignore the development of connectionism, particularly if connectionist systems prove able to learn to handle structured representations.
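To see concretely what "ascribing content through correlation" amounts to, here is a minimal sketch (assuming Python with numpy; the toy network, its sizes, and the naming are illustrative, not from the paper). A hidden unit is "interpreted" as representing whichever world feature its activation happens to track:

    # Illustrative only: "correlational semantics" ascribes content to a
    # hidden unit by finding the world feature its activation tracks best.
    import numpy as np

    rng = np.random.default_rng(0)
    world = rng.integers(0, 2, size=(200, 3))    # 200 states of 3 binary world features
    W = rng.normal(size=(3, 5))                  # arbitrary input-to-hidden weights
    hidden = 1.0 / (1.0 + np.exp(-(world @ W)))  # sigmoid activations of 5 hidden units

    for unit in range(hidden.shape[1]):
        r = [abs(np.corrcoef(hidden[:, unit], world[:, f])[0, 1]) for f in range(3)]
        print(f"unit {unit} is 'about' world feature {int(np.argmax(r))}")

The "meaning" here comes entirely from the analyst's correlation test, not from anything the network does in the world, which is the sense in which such representations remain as ungrounded as classical symbols.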
 
Cummins, Robert (1996). Why There Is No Symbol Grounding Problem. Chapter 9 of: Representations, Targets, and Attitudes. MIT Press.

Dorffner, G., Prem, E. & Trost, H. (1993). Words, Symbols, and Symbol Grounding. Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-93-30.
 [available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-93-30.ps.z]

Dyer, M. (1990).  Intentionality and computationalism: minds, machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 303-19.
 

 Reply to Searle/Harnad: systems reply, level confusions, etc.
 
 
Dyer, M. (1990). Finding lost minds. Journal of Experimental and Theoretical Artificial Intelligence 2: 329-39.
Reply to Harnad (1990): symbols, other minds, physically embodied algorithms.
 
 
Fetzer, J.H. (1995).  Minds and machines: Behaviorism, dualism and beyond. In: S. Franchi and G. Güzeldere (eds.), Constructions of the Mind: Artificial Intelligence and the Humanities. Stanford Humanities Review  4(2).
  Includes commentaries on several of Harnad's papers.
 
 
Frixione, M. & Spinelli, G. (1992). Connectionism and functionalism: the importance of being a subsymbolist. Journal of Experimental and Theoretical Artificial Intelligence 1: 3-17.
  The authors suggest that the problem of modelling the reference of mental symbols from a cognitive point of view requires the abandonment of a purely symbolic approach, and the adoption of a subsymbolic level of representation. Some philosophical consequences of a subsymbolic level of this kind are discussed. After distinguishing between the problem of reference and that of intentionality (which cannot be solved by positing a subsymbolic level of representation), the authors show how a subsymbolic approach can be compatible with a functionalist view of the mind, broadly construed. Finally, some consequences of subsymbolic models of reference regarding the problem of the inverted spectrum are described.
 
 
Frixione, M. (1994). Logica, significato e intelligenza artificiale (Logic, Meaning, and Artificial Intelligence). Rome: Angeli.
  The referential competence problem (see Marconi 1997) is another version of the symbol grounding problem. On this view, the Total Turing Test is the criterion for a referentially competent system.
 
Hauser, L. (1993). Reaping the Whirlwind: Reply to Harnad's Other Bodies, Other Minds. Minds and Machines 3(2): 219-238.
  Harnad's (see Harnad 1991) proposed "robotic upgrade" of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is "no evidence" [p. 45] of consciousness besides "private experience" [p. 52]. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from "as if" thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts -- "there is in fact no evidence for me that anyone else but me has a mind" [p. 45]. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian "faith" [p. 52] in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room "Experiment" than give up all claim to know others think. It would be better to allow that (passing) Turing's Test evidences -- even strongly evidences -- thought.
 
 
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2(3): 217-238.
  When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence John Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
 
Jackson, S.A. (1994). Grounding or association? Connection Science 6: 120-122.  
Commentary on Nenov & Dyer (1994).
 
 
MacDorman, K. F. (1997). Symbol grounding: Learning categorical and sensorimotor predictions for coordination in autonomous robots. Technical Report No. 423. Computer Laboratory, Cambridge University (e-mail: librarian@cl.cam.ac.uk).
  To act intelligently, agents must be able to adapt to changing behavioural possibilities. This dissertation proposes a model that enables them to do this. An agent learns sensorimotor predictions from spatiotemporal correlations in sensory projections, motor signals, and physiological variables. Currently elicited predictions constitute its model of the world.
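A minimal sketch of the general idea of learning sensorimotor predictions from correlations in the sensorimotor stream (assuming Python with numpy; the linear predictor, delta-rule update, and toy environment dynamics below are stand-ins for exposition, not MacDorman's actual model):

    # Illustrative only: learn to predict the next sensory projection from
    # the current sensory projection plus the motor signal.
    import numpy as np

    rng = np.random.default_rng(1)
    W = np.zeros((4, 3))        # weights: (3 sensors + 1 motor) -> 3 predicted sensors
    lr = 0.05                   # learning rate
    s = rng.normal(size=3)      # initial sensory projection

    for t in range(2000):
        m = rng.normal(size=1)                  # motor signal issued at time t
        x = np.concatenate([s, m])              # joint sensorimotor state
        s_next = np.tanh(0.8 * s + 0.5 * m[0])  # toy environment dynamics
        error = s_next - x @ W                  # prediction error
        W += lr * np.outer(x, error)            # delta-rule weight update
        s = s_next

    print("mean absolute prediction error:", np.abs(error).mean())

In the spirit of the abstract, the currently elicited predictions (x @ W) play the role of the agent's model of the world.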
 
MacDorman, K. F. (1997). How to ground symbols adaptively. In: S. Ó Nualláin, P. McKevitt & E. Mac Aogáin (eds.). Two Sciences of Mind. John Benjamins.
  The first section takes up the question of how symbols are to be grounded in sensory projections by comparing alternative approaches to grounding symbols. Although innate feature detectors may contribute to low-level sensory processing, by themselves they are probably insufficient to ground the vast numbers of symbols that would be required to represent all the different kinds of potentially recognizable things. Some form of empirical adaptation seems necessary. In the second section, it is argued that from an evolutionary standpoint basic categorical representations must, in the first instance, be related to sensorimotor coordination. An illustration follows showing how this mapping can be learned. The last section proposes a model for adaptively discovering relevant categories in a simulated environment. It also discusses how this model might be extended to learn the sorts of higher-level behaviors that are typically identified with symbolic planning.
 
MacDorman, K. F. (1998). Feature learning, multiresolution analysis, and symbol grounding.  A peer commentary on Schyns, Goldstone, and Thibaut's 'The development of features in object concepts'. Behavioral and Brain Sciences.
  Cognitive theories based on a fixed feature set suffer from the frame and symbol grounding problems. Flexible features and other empirically acquired constraints (e.g., analog-to-analog mappings) provide a framework for letting extrinsic relations influence symbol manipulation. By offering a biologically plausible basis for feature learning, nonorthogonal multiresolution analysis and dimensionality reduction, informed by functional constraints, may contribute toward a solution to the symbol grounding problem.
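To make the last suggestion concrete, here is a sketch of multiresolution features followed by dimensionality reduction (assuming Python with numpy; for simplicity it uses the orthogonal Haar transform where the commentary envisions nonorthogonal analyses, and an SVD projection as the reduction step):

    # Illustrative only: multiresolution (Haar) features followed by
    # dimensionality reduction, as one route to learned, flexible features.
    import numpy as np

    def haar_level(x):
        # One level of the Haar transform: pairwise averages (coarse scale)
        # and pairwise differences (detail at the current scale).
        avg = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
        dif = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
        return avg, dif

    rng = np.random.default_rng(2)
    signals = rng.normal(size=(100, 8))   # 100 toy one-dimensional sensory projections

    coarse, d1 = haar_level(signals)      # finest-scale details
    coarse, d2 = haar_level(coarse)       # next-scale details
    feats = np.concatenate([coarse, d2, d1], axis=1)   # multiresolution features

    feats = feats - feats.mean(axis=0)                 # center before reduction
    _, _, Vt = np.linalg.svd(feats, full_matrices=False)
    reduced = feats @ Vt[:3].T            # keep the 3 strongest learned features
    print(reduced.shape)                  # (100, 3)

Which components survive the reduction depends on the data the agent actually encounters, which is one way "empirically acquired constraints" can shape the feature set rather than fixing it in advance.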
 
Marconi, D. (1997). Referentially competent systems. In: Lexical Competence, Chapter 6. MIT Press.
  Presents a dual theory of lexical competence, according to which knowing the meaning of a word consists in being both inferentially and referentially competent. See Frixione (1994) for the relevance of the symbol grounding problem to Marconi's notion of referential competence.
 
 
Marraffa, M. (forthcoming).  Ancoraggio dei simboli mediante reti neurali: un esame critico (Grounding Symbols with Neural Nets: A Critical Overview).  Il Cannocchiale, 3, 1998.
  An introduction to the symbol grounding debate, aimed at an Italian philosophical audience. Starting from Marconi's (1997) theory of lexical competence, it works through Christiansen and Chater's (1992) ideas on connectionist symbol grounding, and argues that the symbol emergence problem (Plunkett et al. 1992) is antecedent to the symbol grounding problem.
 
 
Meini, C. & Paternoster, A. (1996). Understanding language through vision. In P. McKevitt (ed.), Integration of Natural Language and Vision, Dordrecht: Kluwer, vol. III.
  Refers to Marconi's theory of lexical competence.
 
 
Nenov, V.I. & Dyer, M. (1994). Perceptually grounded language learning: Part 2 – DETE: A neural/procedural model. Connection Science 6: 1-40.

Plunkett, K., Sinha, C., Møller, M.F. & Strandsby, O. (1992). Symbol grounding or the emergence of symbols? Vocabulary growth in children and a connectionist net. Connection Science 4(3-4), special issue: Philosophical Issues in Connectionist Modelling, A. Clark (ed.), pp. 293-312.

Prem, E. (1994). Symbol grounding revisited. Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-94-19.
 [available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-94-19.ps.z]

Prem, E. (1994). Symbol grounding and transcendental logic. Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-94-20.
 [available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-94-20.ps.z]

Prem, E. (1995). Dynamic symbol grounding, state construction and the problem of teleology. In: Mira, J. & Sandoval, F. (eds.). From Natural to Artificial Neural Computation. Proceedings of the International Workshop on Artificial Neural Networks, Malaga-Torremolinos, Spain, June 1995. Springer, LNCS 930.

Prem, E. (1995). Symbol grounding and transcendental logic. In Niklasson L. & Boden M. (eds.), Current Trends in Connectionism, Lawrence Erlbaum, Hillsdale, NJ, pp. 271-282.

Sales, N.J. and Evans, R.G. (1995). An approach to solving the symbol grounding problem: neural networks for object naming and retrieval. Proc. of CMC-95, Eindhoven.