
Outline of the Model

I will define a computational model of human color perception and color naming, i.e., I will construct an algorithmic mapping as defined in Section . The model is based in part on existing data about the neurophysiology and psychophysics of color perception, and it can also explain existing anthropological and linguistic data on color naming. The model should allow an artificial cognitive agent (e.g., a GLAIR agent [Shapiro \& Rapaport 1987][Hexmoor et al. 1993b][Hexmoor et al. 1993c][Lammens et al. 1994]), when equipped with the necessary sensors and actuators, to

  1. Name colors in response to a visual stimulus, and provide a confidence or ``goodness of example'' rating of its judgment; this requires evaluation of and some kind of thresholding on the resulting pairs .

  2. Point out examples of named colors in its environment, and provide a confidence rating, and as a derivative of this capability, pick the best example of a named color from a set of color samples, or from its environment in general; this requires evaluation of over the whole visual field, and some kind of maximization on the resulting pairs .
The performance on these tasks must be consistent with human performance on the same tasks, as described in Chapter . In particular, this requires that
  1. The model place the foci of basic color categories, as described in [Berlin \& Kay 1969] and elsewhere, in the same regions of the color space as human subjects do.

  2. The model place the boundaries of basic color categories in the same regions of the color space (cf. Chapter ) as human subjects do.
A more precise definition of ``same region in the color space'' will be given, to determine success in this area.
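One candidate way of making ``same region in the color space'' precise is sketched below: require the model's focus for each basic category to lie within a fixed tolerance of the corresponding human focus. This is a hypothetical criterion for illustration; the tolerance value and the choice of a Euclidean metric are assumptions, not the definition given later in the text.

```python
# Hypothetical success criterion for focus placement: a category counts
# as agreeing when the model's focus lies within `tolerance` of the
# human focus, measured in whatever color space both are expressed in.
# The metric and tolerance are illustrative assumptions.

def euclidean(p, q):
    """Euclidean distance between two points in the color space."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def focus_agreement(model_foci, human_foci, tolerance):
    """Return the set of category names whose model focus falls within
    `tolerance` of the corresponding human focus."""
    return {name for name in human_foci
            if name in model_foci
            and euclidean(model_foci[name], human_foci[name]) <= tolerance}
```

The same shape of criterion could be applied to category boundaries, with boundary points compared instead of foci.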

The model deals only to a limited extent with several important issues in color vision, most notably the effect of surrounds on perceived color (and, more generally, the relation between spatial and color vision) and color constancy. These issues are addressed only insofar as they are relevant to color naming.

The model has been integrated into a vision system capable of interacting with its environment as described above. I will refer to this system as the Color Labeling Robot (CLR), after the Color Reader Robot described as a thought experiment in [Hausser 1989]. The model has also been tested on simulated data, and the results compared to known data about human color naming.

A system that exhibits the behavior described above is an example of a (partly) embodied [Kay \& McDaniel 1978][Lakoff 1987] or grounded [Harnad 1990] system, or it can be seen as an instance of situated cognition [Suchman 1988].
