SYMBOL GROUNDING ENTRY
NB: the links below are not yet active
Sections of Chalmers’ bibliography
relevant to Symbol Grounding
1. Harnad's papers on the symbol grounding problem
Minds, Machines and Searle
Journal of Experimental and Theoretical Artificial Intelligence 1: 5-25.
Searle's celebrated Chinese Room Argument
has shaken the foundations of Artificial Intelligence. Many refutations
have been attempted, but none seem convincing. This paper is an attempt
to sort out explicitly the assumptions and the logical, methodological
and empirical points of disagreement. Searle is shown to have underestimated
some features of computer modeling, but the heart of the issue turns out
to be an empirical question about the scope and limits of the purely symbolic
(computational) model of the mind. Nonsymbolic modeling turns out to be
immune to the Chinese Room Argument. The issues discussed include the Total
Turing Test, modularity, neural modeling, robotics, causality and the symbol-grounding problem.
The Symbol Grounding Problem
Physica D 42: 335-346.
There has been much discussion recently
about the scope and limits of purely symbolic models of the mind and about
the proper role of connectionism in cognitive modeling. This paper describes
the "symbol grounding problem": How can the semantic interpretation of
a formal symbol system be made intrinsic to the system, rather than
just parasitic on the meanings in our heads? How can the meanings of the
meaningless symbol tokens, manipulated solely on the basis of their (arbitrary)
shapes, be grounded in anything but other meaningless symbols? The problem
is analogous to trying to learn Chinese from a Chinese/Chinese dictionary
alone. A candidate solution is sketched: Symbolic representations must
be grounded bottom-up in nonsymbolic representations of two kinds: (1)
"iconic representations," which are analogs of the proximal sensory projections
of distal objects and events, and (2) "categorical representations," which
are learned and innate feature-detectors that pick out the invariant features
of object and event categories from their sensory projections. Elementary
symbols are the names of these object and event categories, assigned on
the basis of their (nonsymbolic) categorical representations. Higher-order
(3) "symbolic representations," grounded in these elementary symbols, consist
of symbol strings describing category membership relations (e.g., "An X
is a Y that is Z"). Connectionism is one natural candidate for the mechanism
that learns the invariant features underlying categorical representations,
thereby connecting names to the proximal projections of the distal objects
they stand for. In this way connectionism can be seen as a complementary
component in a hybrid nonsymbolic/symbolic model of the mind, rather than
a rival to purely symbolic modeling. Such a hybrid model would not have
an autonomous symbolic "module," however; the symbolic functions would
emerge as an intrinsically "dedicated" symbol system as a consequence of
the bottom-up grounding of categories' names in their sensory representations.
Symbol manipulation would be governed not just by the arbitrary shapes
of the symbol tokens, but by the nonarbitrary shapes of the icons and category
invariants in which they are grounded.
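The hybrid architecture this abstract describes can be illustrated with a small didactic sketch (mine, not the paper's): iconic representations as analog copies of sensory projections, categorical representations as invariant detectors that ground elementary symbol names, and higher-order symbolic representations as strings over grounded names. All identifiers, the toy invariants, and the "a zebra is a horse that is striped" definition are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Illustrative sketch of the three kinds of representation described above.
# All names and the toy invariants are assumptions, not the paper's code.

class IconicRep:
    """Analog copy of the proximal sensory projection of a distal object."""
    def __init__(self, sensory_projection: np.ndarray):
        self.trace = sensory_projection.copy()      # preserves analog shape

class CategoricalRep:
    """Learned or innate feature detector picking out a category's invariants."""
    def __init__(self, name, invariant_filter):
        self.name = name                            # elementary symbol (arbitrary shape)
        self.detects = invariant_filter             # nonarbitrary: tied to sensory invariants

    def applies_to(self, icon: IconicRep):
        return self.detects(icon.trace)

# Elementary symbols are grounded by assignment to categorical representations:
horse   = CategoricalRep("horse",   lambda x: x[0] > 0.5)   # toy invariant
striped = CategoricalRep("striped", lambda x: x[1] > 0.5)   # toy invariant

# Higher-order symbolic representations are strings over grounded names,
# e.g. "An X is a Y that is Z":
def define(new_name, *grounded_parents):
    """The new name inherits its grounding from already-grounded names."""
    return CategoricalRep(new_name,
                          lambda x: all(p.detects(x) for p in grounded_parents))

zebra = define("zebra", horse, striped)
projection = np.array([0.9, 0.8])                   # a proximal sensory projection
print(zebra.applies_to(IconicRep(projection)))      # True: symbol use tracks the world
```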
Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem
Minds and Machines 1: 43-54.
Explaining the mind by building machines
with minds runs into the other-minds problem: How can we tell whether any
body other than our own has a mind when the only way to know is by being
the other body? In practice we all use some form of Turing Test: If it
can DO everything a body with a mind can do such that we can't tell them
apart, we have no basis for doubting it has a mind. But what is "everything"
a body with a mind can do? Turing's original "pen-pal" version (the TT)
only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator
could pass the TT undetected. The Total Turing Test (TTT) calls for all
of our linguistic and robotic capacities; immune to Searle's argument,
it suggests how to ground a symbol manipulating system in the capacity
to pick out the objects its symbols refer to. No Turing Test, however,
can guarantee that a body has a mind. Worse, nothing in the explanation
of its successful performance requires a model to have a mind at all. Minds
are hence very different from the unobservables of physics (e.g., superstrings);
and Turing Testing, though essential for machine-modeling the mind, can
really only yield an explanation of the body.
Connecting Object to Symbol in Modeling Cognition
In: A. Clark and
R. Lutz (eds.). Connectionism in Context. Springer-Verlag, pp. 75-90.
Connectionism and computationalism are
currently vying for hegemony in cognitive modeling. At first glance the
opposition seems incoherent, because connectionism is itself computational,
but the form of computationalism that has been the prime candidate for
encoding the "language of thought" has been symbolic computationalism,
whereas connectionism is nonsymbolic (or, as some have hopefully dubbed
it, "subsymbolic"). This paper will examine what is and is not a symbol
system. A hybrid nonsymbolic/symbolic system will be sketched in which
the meanings of the symbols are grounded bottom-up in the system's capacity
to discriminate and identify the objects they refer to. Neural nets are
one possible mechanism for learning the invariants in the analog sensory
projection on which successful categorization is based. "Categorical perception,"
in which similarity space is "warped" in the service of categorization,
turns out to be exhibited by both people and nets, and may mediate the
constraints exerted by the analog world of objects on the formal world of symbols.
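A minimal sketch (mine, not the paper's) of the two grounding capacities the abstract appeals to, discrimination versus identification: discrimination is a graded similarity judgment on analog projections, identification is the assignment of a category name on the basis of learned invariants. The toy detectors below stand in for whatever invariants a neural net would actually learn.

```python
import numpy as np

def discriminate(icon_a: np.ndarray, icon_b: np.ndarray) -> float:
    """Relative judgment: how different do two sensory projections look?"""
    return float(np.linalg.norm(icon_a - icon_b))

def identify(icon: np.ndarray, detectors: dict) -> str:
    """Absolute judgment: which learned invariant does this projection satisfy?"""
    for name, detector in detectors.items():
        if detector(icon):
            return name          # the category's (arbitrary) name, now grounded
    return "unknown"

# Toy invariant detectors standing in for learned feature filters:
detectors = {"small": lambda x: x.mean() < 0.5,
             "large": lambda x: x.mean() >= 0.5}

a, b = np.array([0.1, 0.2]), np.array([0.8, 0.9])
print(discriminate(a, b))                                # graded, analog comparison
print(identify(a, detectors), identify(b, detectors))    # categorical naming
```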
Grounding Symbols in the Analog World with Neural Nets
Think 2(1): 12-78 (Special Issue
on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.).
[Also reprinted in French translation as "L'Ancrage des Symboles dans le
Monde Analogique à l'aide de Réseaux Neuronaux: un Modèle Hybride." In:
Rialle V. et Payette D. (Eds) La Modélisation. LEKTON, Vol IV, No 2.]
The predominant approach to cognitive
modeling is still what has come to be called "computationalism," the hypothesis
that cognition is computation. The more recent rival approach is "connectionism,"
the hypothesis that cognition is a dynamic pattern of connections and activations
in a "neural net." Are computationalism and connectionism really deeply
different from one another, and if so, should they compete for cognitive
hegemony, or should they collaborate? These questions will be addressed
here, in the context of an obstacle that is faced by computationalism (as
well as by connectionism if it is either computational or seeks cognitive
hegemony on its own): The symbol grounding problem.
Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component
The following commentaries are available,
each with a response:
Boyle, C. Franklin Transduction
and Degree of Grounding
Bringsjord, Selmer People
Are Infinitary Symbol Systems: No Sensorimotor Capacity Necessary
Dietrich, Eric The
Ubiquity of Computation
Dyer, Michael G. Computationalism,
Neural Networks and Minds, Analog or Otherwise
Fetzer, James H. The
TTT is not the Final Word
Hayes, Pat Computers
Don't Follow Instructions
Honavar, Vasant A
Note on the Symbol Grounding Problem and its Solution
Kentridge, R.W. Computation,
Chaos and Non-Deterministic Symbolic Computation: The Chinese Room Problem
MacLennan, Bruce J. Grounding
McDermott, Drew The
Digital Computer as Red Herring
Powers, David M. W. A
Grounding of Definition
Roitblat, Herbert L. Computational
Searle, John R. The
Failures of Computationalism
In: Proceedings of the Fifteenth Annual
Meeting of the Cognitive Science Society. NJ: Erlbaum
"Symbol Grounding" is beginning to mean
too many things to too many people. My own construal has always been simple:
Cognition cannot be just computation, because computation is just the systematically
interpretable manipulation of meaningless symbols, whereas the meanings
of my thoughts don't depend on their interpretability or interpretation
by someone else. On pain of infinite regress, then, symbol meanings must
be grounded in something other than just their interpretability if they
are to be candidates for what is going on in our heads. Neural nets may
be one way to ground the names of concrete objects and events in the capacity
to categorize them (by learning the invariants in their sensorimotor projections).
These grounded elementary symbols could then be combined into symbol strings
expressing propositions about more abstract categories. Grounding does
not equal meaning, however, and does not solve any philosophical problems.
Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem
The former paper includes replies to
the following papers:
Dorffner, G. & E. Prem Connectionism,
symbol grounding, and autonomous agents
Gassner, M. The structure
Brooks, R.A. The engineering
of physical grounding
Christiansen M.H. & N. Chater
Symbol grounding – The emperor’s new theory of meaning?
What is the relationship between cognitive
theories of symbol grounding and philosophical theories of meaning? In
this paper the authors argue that, although often considered to be fundamentally
distinct, the two are actually very similar. Both set out to explain how
non-referring atomic tokens or states of a system can acquire status as
semantic primitives within that system. In view of this close relationship,
the authors consider what attempts to solve these problems can gain from
each other. They argue that, at least presently, work on symbol grounding
is not likely to have an impact on philosophical theories of meaning. On
the other hand, the authors suggest that the symbol grounding theorists
have a lot to learn from their philosophical counterparts. In particular,
the former must address the problems that have been identified in attempting
to formulate philosophical theories of reference.
Lakoff, G. Grounded concepts
Touretzky, D.S. The heart of symbols:
Why symbol grounding is irrelevant
PSYCOLOQUY 4(34) frame-problem.11.
Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing
Test for Artificial Life
Artificial Life 1(3): 293-301 (reprinted
in: C.G. Langton (Ed.). Artificial Life: An Overview. MIT Press).
Both Artificial Life and Artificial Mind
are branches of what Dennett has called "reverse engineering": Ordinary
engineering attempts to build systems to meet certain functional specifications,
reverse bioengineering attempts to understand how systems that have already
been built by the Blind Watchmaker work. Computational modelling (virtual
life) can capture the formal principles of life, perhaps predict and explain
it completely, but it can no more be alive than a virtual forest
fire can be hot. In itself, a computational model is just an ungrounded
symbol system; no matter how closely it matches the properties of what
is being modelled, it matches them only formally, with the mediation of
an interpretation. Synthetic life is not open to this objection, but it
is still an open question how close a functional equivalence is needed
in order to capture life. Close enough to fool the Blind Watchmaker is
probably close enough, but would that require molecular indistinguishability,
and if so, do we really need to go that far?
Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't
Special Issue on "What Is Computation"
Minds and Machines 4:379-390 [Also appears in French translation
in "Penser l'Esprit: Des Sciences de la Cognition a une Philosophie Cognitive,"
V. Rialle & D. Fisette, Eds. Presses Universite de Grenoble. 1996]
Computation is interpretable symbol manipulation.
Symbols are objects that are manipulated on the basis of rules operating
only on the symbols' shapes, which are arbitrary in relation to what they
can be interpreted as meaning. Even if one accepts the Church/Turing Thesis
that computation is unique, universal and very near omnipotent, not everything
is a computer, because not everything can be given a systematic interpretation;
and certainly everything can't be given every systematic interpretation.
But even after computers and computation have been successfully distinguished
from other kinds of things, mental states will not just be the implementations
of the right symbol systems, because of the symbol grounding problem: The
interpretation of a symbol system is not intrinsic to the system; it is
projected onto it by the interpreter. This is not true of our thoughts.
We must accordingly be more than just computers. My guess is that the meanings
of our symbols are grounded in the substrate of our robotic capacity to
interact with that real world of objects, events and states of affairs
that our symbols are systematically interpretable as being about.
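To make the opening definition concrete, here is a toy rewrite system (an illustrative sketch, not drawn from the paper): its rules mention only the shapes of the tokens, yet its behaviour can be systematically interpreted as unary addition, an interpretation projected onto it by us rather than intrinsic to the system.

```python
# Toy illustration of "interpretable symbol manipulation": the rules below
# operate only on the arbitrary shapes of the tokens; the arithmetic reading
# is projected onto the system by the interpreter, not intrinsic to it.

def step(tape: str) -> str:
    # Shape-based rule: "|+"  ->  "+|"  (slide one stroke across the plus sign)
    return tape.replace("|+", "+|", 1)

def run(tape: str) -> str:
    while "|+" in tape:
        tape = step(tape)
    return tape.replace("+", "")       # shape-based cleanup rule

print(run("||+|||"))   # '|||||' : WE read this as 2 + 3 = 5; the system does not
```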
Grounding symbols in sensorimotor categories
with neural networks
IEE Colloquium "Grounding Representations:
Integration of Sensory Information in Natural Language Processing, Artificial
Intelligence and Neural Networks" (Digest No.1995/103). ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad95.iee.html
Does the Mind Piggy-Back on Robotic and Symbolic Capacity?
In: H. Morowitz (ed.). The Mind,
the Brain, and Complex Adaptive Systems. Santa Fe Institute Studies
in the Sciences of Complexity. Volume XXII. P. 204-220.
Cognitive science is a form of "reverse
engineering" (as Dennett has dubbed it). We are trying to explain the mind
by building (or explaining the functional principles of) systems that have
minds. A "Turing" hierarchy of empirical constraints can be applied to
this task, from t1, toy models that capture only an arbitrary fragment
of our performance capacity, to T2, the standard "pen-pal" Turing Test
(total symbolic capacity), to T3, the Total Turing Test (total symbolic
plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability).
All scientific theories are underdetermined by data. What is the right
level of empirical constraint for cognitive theory? I will argue that T2
is underconstrained (because of the Symbol Grounding Problem and Searle's
Chinese Room Argument) and that T4 is overconstrained (because we don't
know what neural data, if any, are relevant). T3 is the level at which
we solve the "other minds" problem in everyday life, the one at which evolution
operates (the Blind Watchmaker is no mind-reader either) and the one at
which symbol systems can be grounded in the robotic capacity to name and
manipulate the objects their symbols are about. I will illustrate this
with a toy model for an important component of T3 -- categorization --
using neural nets that learn category invariance by "warping" similarity
space the way it is warped in human categorical perception: within-category
similarities are amplified and between-category similarities are attenuated.
This analog "shape" constraint is the grounding inherited by the arbitrarily
shaped symbol that names the category and by all the symbol combinations
it enters into. No matter how tightly one constrains any such model, however,
it will always be more underdetermined than normal scientific and engineering
theory. This will remain the ineliminable legacy of the mind/body problem.
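The "warping" claim can be made concrete with a simple measure (my own illustrative sketch, not the paper's): the ratio of mean between-category to mean within-category distances in an internal representation space. Categorical perception should raise this ratio, since within-category similarities are amplified and between-category similarities attenuated.

```python
import numpy as np

def warping_index(reps: np.ndarray, labels: np.ndarray) -> float:
    """Ratio > 1 means between-category separation exceeds within-category spread."""
    d = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(reps), dtype=bool)
    within = d[same & off_diag].mean()
    between = d[~same].mean()
    return between / within

# Illustrative example: stimuli on a 1-D continuum, sorted into 3 categories.
stimuli = np.linspace(0, 1, 12)[:, None]
labels = np.repeat([0, 1, 2], 4)
print(warping_index(stimuli, labels))   # raw continuum: modest ratio
# After category learning, internal representations typically raise this ratio:
# within-category distances shrink and between-category distances grow.
```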
Grounding Symbolic Capacity in Robotic Capacity
In: L. Steels and R. Brooks (eds.). The
Artificial Life Route to Artificial Intelligence: Building Situated Embodied
Agents. Hillsdale (NJ): LEA. Pp. 277-286.
Categorical Perception in Neural Nets: Implications for Symbol Grounding
(with S.J. Hanson and J. Lubin)
In: V. Honavar and L. Uhr (eds).
Symbol Processors and Connectionist Network Models in Artificial Intelligence
and Cognitive Modelling: Steps Toward Principled Integration. New York:
Academic Press. Pp. 191-206.
After people learn to sort objects into
categories they see them differently. Members of the same category look
more alike and members of different categories look more different. This
phenomenon of within-category compression and between-category separation
in similarity space is called categorical perception (CP). It is exhibited
by human subjects, animals and neural net models. In backpropagation nets
trained first to auto-associate 12 stimuli varying along a one-dimensional
continuum and then to sort them into 3 categories, CP arises as a natural
side-effect because of four factors: (1) Maximal interstimulus separation
in hidden-unit space during auto-association learning, (2) movement toward
linear separability during categorization learning, (3) inverse-distance
repulsive force exerted by the between-category boundary, and (4) the modulating
effects of input iconicity, especially in interpolating CP to untrained
regions of the continuum. Once similarity space has been "warped" in this
way, the compressed and separated "chunks" have symbolic labels which could
then be combined into symbol strings that constitute propositions about
objects. The meanings of such symbolic representations would be "grounded"
in the system's capacity to pick out from their sensory projections the
object categories that the propositions were about.
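A simplified, self-contained sketch of the two-phase regime described above (an assumption-laden illustration, not the authors' code): a small backpropagation net is trained first to auto-associate stimuli from a one-dimensional continuum, then to sort them into three categories, and the between/within-category distance ratio of its hidden-unit representations is compared before and after category learning. The thermometer input coding, network sizes, and learning settings are all my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 stimuli on a 1-D continuum, thermometer-coded over 12 input units.
X = np.tril(np.ones((12, 12)))            # stimulus k activates units 0..k
labels = np.repeat([0, 1, 2], 4)
Y = np.eye(3)[labels]                     # 3 category targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 3                                     # hidden units
W1 = rng.normal(0, 0.5, (12, H)); b1 = np.zeros(H)

def hidden(X):
    return sigmoid(X @ W1 + b1)

def train(targets, W2, b2, epochs=3000, lr=0.5):
    """Plain backpropagation with a squared-error loss on sigmoid outputs."""
    global W1, b1
    for _ in range(epochs):
        h = hidden(X)
        out = sigmoid(h @ W2 + b2)
        err = out - targets                       # d(loss)/d(out), up to a constant
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

def separation(reps):
    """Mean between-category distance over mean within-category distance."""
    d = np.linalg.norm(reps[:, None] - reps[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off = ~np.eye(12, dtype=bool)
    return d[~same].mean() / d[same & off].mean()

# Phase 1: auto-association (reconstruct the input through the hidden layer).
W_auto = rng.normal(0, 0.5, (H, 12)); b_auto = np.zeros(12)
train(X, W_auto, b_auto)
print("after auto-association:", round(separation(hidden(X)), 2))

# Phase 2: categorization (same hidden layer, new output layer of 3 units).
W_cat = rng.normal(0, 0.5, (H, 3)); b_cat = np.zeros(3)
train(Y, W_cat, b_cat)
print("after categorization: ", round(separation(hidden(X)), 2))
# Expected (illustrative) pattern: the ratio rises after category training,
# i.e. within-category compression and between-category separation (CP).
```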
The Origin of Words: A Psychophysical Hypothesis
In: W. Durham and B. Velichkovsky
(Eds.). Communicating Meaning: Evolution and Development of Language.
Hillsdale (NJ): LEA.
It is hypothesized that words originated
as the names of perceptual categories and that two forms of representation
underlying perceptual categorization, iconic and categorical representations,
served to ground a third, symbolic, form of representation. The
third form of representation made it possible to name and describe our
environment, chiefly in terms of categories, their memberships, and their
invariant features. Symbolic representations can be shared because they
are intertranslatable. Both categorization and translation are approximate
rather than exact, but the approximation can be made as close as we wish.
This is the central property of that universal mechanism for sharing descriptions
that we call natural language.
On the Virtues of Theft Over Honest Toil:
Grounding Language and Thought in Sensorimotor Categories
Paper presented at Hang Seng Centre Conference
on Language and Thought, Sheffield University, June 1996. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad96.language.theft.html
The Adaptive Advantage of Symbolic Theft
Over Sensorimotor Toil: Grounding Language in Perceptual Categories
(with A. Cangelosi)
Paper presented at the Second International
Conference on the Evolution of Language, London, April 1998. To appear
in volume edited by C. Knight and J. Hurford. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad98.theft.toil.html
2. Literature on
the symbol grounding problem.
Christiansen, M.H. & Chater, N. (1992).
Connectionism, meaning and learning.
Connection Science 4: 227-252.
Cummins, Robert (1996). Why There Is No Symbol
Grounding Problem, Chapter 9 of: Representations, Targets, and Attitudes.
Dorffner, G., Prem, E. & Trost, H.
(1993) Words, Symbols, and Symbol Grounding,
Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-93-30.
[available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-93-30.ps.z]
Dyer, M. (1990). Intentionality and
computationalism: minds, machines, Searle and Harnad. Journal of Experimental
and Theoretical Artificial Intelligence 2: 303-19.
Dyer, M. (1990), Finding lost minds. Journal
of Experimental and Theoretical Artificial Intelligence 2: 329-39.
Fetzer, J.H. (1995). Minds and machines:
Behaviorism, dualism and beyond. In: S. Franchi and G. Güzeldere (eds.),
Constructions of the Mind: Artificial Intelligence and the Humanities.
Stanford Humanities Review 4(2).
Frixione, M. & Spinelli,
G. (1992) Connectionism and functionalism: the importance of being a subsymbolist.
Journal of Experimental and Theoretical Artificial Intelligence
Frixione, M. (1994).
Logica, significato e intelligenza artificiale (Logic, Meaning,
and Artificial Intelligence). Rome: Angeli.
Hauser, L. (1993). Reaping
the Whirlwind: Reply to Harnad's Other Bodies, Other Minds.
Minds and Machines 3 (2): 219-238.
Hayes, P., Harnad, S., Perlis, D. & Block,
N. (1992). Virtual Symposium on Virtual Mind.
Minds and Machines 2(3) 217-238.
Jackson, S.A. (1994). Grounding or association?
Connection Science 6: 120-122.
MacDorman, K. F. (1997). Symbol
grounding: Learning categorical and sensorimotor predictions for coordination
in autonomous robots. Technical Report No. 423. Computer Laboratory,
Cambridge University.
MacDorman, K. F. (1997). How
to ground symbols adaptively. In: S. Ó Nualláin, P. McKevitt,
E. MacAogain (eds.). Two Sciences of Mind.
MacDorman, K. F. (1998). Feature learning,
multiresolution analysis, and symbol grounding. A peer commentary
on Schyns, Goldstone, and Thibaut's 'The development of features in object
concepts'. Behavioral and Brain Sciences.
Marconi, D. (1997) Referentially competent
systems. In: Lexical Competence, chapter 6, MIT Press.
Marraffa, M. (forthcoming).
Ancoraggio dei simboli mediante reti neurali: un esame critico (Grounding
Symbols with Neural Nets: A Critical Overview). Il Cannocchiale,
Meini, C. & Paternoster, A. (1996). Understanding
language through vision. In P. McKevitt (ed.), Integration of Natural
Language and Vision, Dordrecht: Kluwer, vol. III.
Nenov, V.I. & Dyer, M. (1994). Perceptually
grounded language learning: Part 2 – DETE: A neural/procedural model. Connection
Science 6: 1-40.
Plunkett, K., Sinha, C., Moller, M.F.,
Strandsby, O. (1992). Symbol grounding or the emergence of symbols?
Vocabulary growth in children and a connectionist net. Connection
Science, 4(3-4), special issue: Philosophical issues in connectionist
modelling, A. Clark (ed.), pp. 293-312.
Prem, E. (1994). Symbol
grounding revisited. Österreichisches Forschungsinstitut
für Artificial Intelligence, Wien, TR-94-19.
[available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-94-19.ps.z]
Prem, E. (1994). Symbol
grounding and transcendental logic. Österreichisches Forschungsinstitut
für Artificial Intelligence, Wien, TR-94-20.
[available online at ftp://ftp.ai.univie.ac.at/papers/oefai-tr-93-30.ps.z]
Prem, E. (1995). Dynamic symbol grounding
state construction and the problem of teleology. in Mira J. & Sandoval
F. (eds.), From Natural to Artificial Neural Computation, Proc. International
Workshop on Artificial Neural Networks, Malaga-Torremolinos, Spain, June.
Springer, LNCS 930.
Prem, E. (1995). Symbol grounding and transcendental
logic. In Niklasson L. & Boden M. (eds.), Current Trends in Connectionism,
Lawrence Erlbaum, Hillsdale, NJ, pp. 271-282.
Sales, N.J. and Evans, R.G. (1989).
An approach to solving the symbol grounding problem: neural networks for
object naming and retrieval. Proc. of CMC-95, Eindhoven.