There is a more fundamental problem with Tarski-style semantics with regard to its potential use in KRR systems. Given the discrete symbolic nature of logic, Tarski-style semantic models have always presupposed an equally discrete domain of interpretation, consisting of individuals that are the images of individual constants of the language under the interpretation function, sets of such individuals corresponding to predicates of the language, etc. It does not matter whether the domain of interpretation is taken to be the real world, a possible world, objects of thought, or something else; the discreteness assumption is universal. As I discussed in Section , the categories of human perception (and, by extension, of human cognition, since I firmly believe that perception is one of the foundations of cognition) are not discrete. This misfit between logical categories and perceptual categories is, I believe, one of the basic problems in trying to use logic-inspired KRR formalisms as agent-level mechanisms (see also [Lakoff 1987], among others). If we want to make semantic models part of the agent's machinery, as described above, they will have to take the non-discrete nature of natural categories into account. We find support for this point of view in a perhaps unexpected place, viz. a textbook on mathematical logic:
Most natural and artificial languages are characteristically discrete and linear (one-dimensional). On the other hand, our perception of the external world is not felt by us to be either discrete or linear, although these characteristics are observed on the level of physiological mechanisms (coding by impulses in the nervous system). The human brain clearly uses both principles. The perception of images as a whole, along with emotions, are more closely connected with nonlinear and non-discrete processes - perhaps of a wave nature. [Manin 1977, p. 18]
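The discreteness assumption is easy to make explicit. In the standard textbook form (the notation below is the usual one, given here only for illustration, not tied to any particular KRR formalism), a Tarski-style model is a pair consisting of a domain and an interpretation function:

\[
\mathcal{M} = \langle D, I \rangle, \qquad
I(c) \in D \ \text{for an individual constant } c, \qquad
I(P) \subseteq D^{n} \ \text{for an } n\text{-ary predicate } P,
\]
\[
\mathcal{M} \models P(c_1, \ldots, c_n)
\quad \text{iff} \quad
\langle I(c_1), \ldots, I(c_n) \rangle \in I(P).
\]

Every ingredient here is all-or-nothing: an object either is or is not a member of the extension $I(P)$, so the apparatus leaves no room for the graded category membership that perception exhibits.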
My dissertation work is a case study of how to map a continuous world onto a set of discrete symbols, viz. basic color terms. The essential characteristics of the model in this respect are that it contains an analog representation which functions as a perceptual or psychological space (cf. [Shepard 1987]), causally connected to the outside world via sensors, and a mechanism that relates regions in this space to individual terms and associated ``typicality'' or ``goodness'' values (Section ). The regions representing the extensions of the terms may or may not overlap, and the characteristic functions associated with the terms (if we conceive of them as something like fuzzy sets) are continuous-valued. This, I believe, is a much more realistic attempt to characterize the semantics of a set of terms than any discrete model can hope to achieve. As logicians, we may not like the inherent fuzziness or ``scruffiness'' of a model with continuous numerical components, but as students of cognition we are forced to accept and deal with it.
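To make the mechanism concrete, here is a minimal sketch in Python of how continuous characteristic functions can relate points in a perceptual space to discrete terms with graded typicality values. It assumes Gaussian-shaped membership functions; the focal coordinates and the shared width below are illustrative placeholders, not values from the actual model.

import math

# Hypothetical focal points for a few basic color terms in a 3-D
# perceptual space (coordinates are illustrative placeholders).
FOCAL_POINTS = {
    "red":    (50.0, 70.0, 50.0),
    "green":  (55.0, -60.0, 40.0),
    "blue":   (40.0, 10.0, -60.0),
    "yellow": (90.0, -5.0, 85.0),
}

# Width of the region around each focal point; a single shared value
# keeps the sketch simple (a realistic model would fit one per term).
WIDTH = 40.0

def typicality(point, focus, width=WIDTH):
    """Continuous characteristic function: 1.0 at the focal point,
    falling off smoothly with distance, like a fuzzy-set membership."""
    dist2 = sum((p - f) ** 2 for p, f in zip(point, focus))
    return math.exp(-dist2 / (2.0 * width ** 2))

def name_color(point):
    """Map a point in the continuous space to the discrete term whose
    region fits it best, together with the graded goodness value.
    Overlapping regions are unproblematic: every term gets a degree."""
    scores = {term: typicality(point, focus)
              for term, focus in FOCAL_POINTS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# A stimulus lying between focal red and focal yellow is named with the
# nearer term, but with a typicality well below 1.0: the category
# boundary is graded rather than sharp.
print(name_color((70.0, 35.0, 65.0)))

Note that nothing in this sketch forces the regions to partition the space: two terms can apply to the same stimulus to different degrees, which is exactly what a discrete model rules out.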