
Embodiment, Symbol Grounding, and Natural Language Semantics

The computational model of color perception and color naming described in this dissertation can be seen as a case study in embodiment, symbol grounding, and natural language semantics. The symbol grounding problem is about how to make the semantic interpretation of a formal symbol system intrinsic to the system, rather than just ``parasitic on the meanings in our heads'' [Harnad 1990][p. 335], i.e., accessible only to us, as designers or observers of the formal system, rather than to the system itself. The concept of symbol grounding is closely related to those of embodiment [Lakoff 1987] and situated action or cognition [Suchman 1988]. As mentioned before, the view of intelligence as closely connected to the properties of the intelligent organism and of the environment the organism operates in can be contrasted with a purely symbolic view, which holds that intelligence can be studied in the abstract, without reference to any organism (e.g., [Newell 1979]).

Since there is no clear definition available in the literature (Section ), at least not for my purpose, I define embodiment as the notion that the representation, manipulation, and semantics of high-level symbolic concepts are determined in part by the physiology (the bodily functions) of an agent and in part by the agent's interaction with the world. For instance, the semantics of color concepts is determined in part by the physiology of the color perception mechanism, and in part by the visual stimuli this mechanism interacts with. The result is the establishment of a mapping between symbolic color concepts and analog representations that reflect some properties of both the color perception mechanism and objects in the world.
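
To make this concrete, here is a minimal sketch in Python of the ``physiology plus world'' idea, using a toy opponent-channel transform as a stand-in for the color perception mechanism. The transform, the RGB stimulus encoding, and all names are illustrative assumptions, not part of the model described in this dissertation.

    def opponent_response(rgb):
        """Map a physical stimulus (an RGB triple in [0, 1]) to an analog
        internal representation: (lightness, red-green, yellow-blue)."""
        r, g, b = rgb
        lightness = (r + g + b) / 3.0        # achromatic channel
        red_green = r - g                    # chromatic opponent channel
        yellow_blue = (r + g) / 2.0 - b      # chromatic opponent channel
        return (lightness, red_green, yellow_blue)

    # The representation reflects both the "body" (the fixed transform above)
    # and the "world" (the stimulus fed into it):
    print(opponent_response((0.9, 0.1, 0.1)))   # a reddish stimulus
    print(opponent_response((0.1, 0.1, 0.8)))   # a bluish stimulus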

My color perception and naming model grounds the color terms from the codomain of the color naming mapping in the perception of visual stimuli that constitute its domain, by connecting them via the mapping itself. In other words, the mapping constitutes a system-internal, referential semantic model of the color terms, or embodies the semantics of color terms for the agent that it is part of. Of course, such a model is an instance of situated cognition if we consider the color terms to be ``mental'' representations that are causally connected to the environment the robot operates in.
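
The following sketch continues the toy example above (reusing opponent_response) to illustrate such a system-internal mapping: a small set of invented category prototypes in the analog representation space, together with a hypothetical naming function. The prototype coordinates are made up for illustration; the point is only that the pairing of terms with stimuli happens inside the agent, via the mapping itself.

    PROTOTYPES = {   # color term -> invented prototype point in the analog space
        "red":    (0.37, 0.80, 0.40),
        "green":  (0.37, -0.80, 0.40),
        "blue":   (0.33, 0.00, -0.75),
        "yellow": (0.63, 0.00, 0.85),
    }

    def name_color(stimulus_rgb):
        """Ground a color term in a perceived stimulus: the returned term is
        causally connected to the input through the perceptual transform."""
        point = opponent_response(stimulus_rgb)
        return min(PROTOTYPES,
                   key=lambda term: sum((p - q) ** 2
                                        for p, q in zip(PROTOTYPES[term], point)))

    print(name_color((0.9, 0.1, 0.1)))   # -> 'red'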

Note that one could simultaneously define a traditional external, model-theoretic semantic model of the color terms the robot is using, which might be more or less co-extensive with the internal semantic model. Or a robot psychologist could study the robot's behavior on color-related tasks and try to infer a semantic model from that. What is important about the internal model is the following:

  1. Without it, the robot would not be able to perform any color-related tasks, since it could not perceive any colors at all.

  2. In the internal model, there is no ambiguity about which ``objects'' symbols are paired with, since the relation is a causal one. As such, the model is immune to criticisms of logical models of meaning that appeal to the indeterminacy of reference or to the under-determination of meaning by truth conditions, e.g., [Putnam 1981].

  3. The pairing of terms with numerical ``goodness'' measures in the codomain of the naming mapping allows for both discrete, non-overlapping categories and graded, overlapping categories. To model human color categorization accurately, the latter kind is needed [Kay \& McDaniel 1978][Berlin \& Kay 1969]. This property of the model reflects and makes explicit the difficulty of mapping a (for all practical purposes) continuous world onto a set of discrete symbols such as color terms. Subsequent processing in the symbolic domain may ignore the overlapping and graded nature of the color categories by applying a thresholding function to the numerical ``goodness'' component and keeping only the term component of the pairs, but access to the ``goodness'' component remains possible if needed (see the sketch following this list).

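As a sketch of point 3, the toy example above can be extended with graded, overlapping category membership and an optional thresholding step. The Gaussian-shaped ``goodness'' function, the width parameter, and the threshold value are illustrative assumptions, not the category models actually used in this dissertation.

    import math

    def goodness(point, prototype, width=0.5):
        """Graded membership: 1.0 at the prototype, falling off smoothly."""
        d2 = sum((p - q) ** 2 for p, q in zip(point, prototype))
        return math.exp(-d2 / (2 * width ** 2))

    def graded_naming(stimulus_rgb):
        """Pair every color term with its goodness for the given stimulus."""
        point = opponent_response(stimulus_rgb)
        return {term: goodness(point, proto) for term, proto in PROTOTYPES.items()}

    def discrete_naming(stimulus_rgb, threshold=0.5):
        """Collapse the graded pairing onto discrete terms by thresholding."""
        return [term for term, g in graded_naming(stimulus_rgb).items()
                if g >= threshold]

    orange_ish = (0.9, 0.5, 0.1)         # falls between 'red' and 'yellow'
    print(graded_naming(orange_ish))     # overlapping, graded memberships
    print(discrete_naming(orange_ish))   # possibly more than one term

In this toy example the ``orange-ish'' stimulus receives a high goodness for both red and yellow, which is exactly the kind of overlap that a later thresholding step may or may not discard.
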
My thesis with respect to symbol grounding is a pragmatic one. I do not claim that, without grounding, a symbol system cannot truly ``understand'' the world, because that would require a consistent definition of ``understanding'', which is lacking to date. My thesis is that grounding, as I interpret it, enables a robotic agent to perform well on color-related tasks and that it provides a well-defined model of the agent's semantics of color terms that also adequately models the semantics of human color terms. The latter may be important for man-machine communication, as Winston has pointed out [Winston 1975][p. 154].
