A novel cognitive theory of semantics that proposes that the meanings of words can be described in terms of geometric structures.
In The Geometry of Meaning, Peter Gardenfors proposes a theory of semantics that bridges cognitive science and linguistics and shows how theories of cognitive processes, in particular concept formation, can be exploited in a general semantic model. He argues that our minds organize the information involved in communicative acts in a format that can be modeled in geometric or topological terms, in what he calls conceptual spaces, extending the theory he presented in an earlier book by that name.
Many semantic theories consider the meanings of words as relatively stable and independent of the communicative context. Gardenfors focuses instead on how various forms of communication establish a system of meanings that becomes shared between interlocutors. He argues that these "meetings of mind" depend on the underlying geometric structures, and that these structures facilitate language learning. Turning to lexical semantics, Gardenfors argues that a unified theory of word meaning can be developed by using conceptual spaces. He shows that the meaning of different word classes can be given a cognitive grounding, and offers semantic analyses of nouns, adjectives, verbs, and prepositions. He also presents models of how the meanings of words are composed to form new meanings and of the basic semantic role of sentences. Finally, he considers the future implications of his theory for robot semantics and the Semantic Web.
This is a theory about how the mind structures concepts. He suggests that concepts consist of convex regions in perceptual dimensions. My own most recent paper suggests something similar. The main difference between our approaches is that while my dimensions are more or less arbitrary, his are meaningful and interpretable, like the dimensions of color space. This may seem like a big advantage for him, but in practice it restricts the examples he can experiment with to toy problems. As I've come to understand high-dimensional semantic spaces better, I've come to realize that choosing a basis in which to work depends on the particular problem you are trying to solve, and is probably not best fixed from the beginning. The chapters about actions, events, and prepositions cover ground I haven't considered very deeply, so they gave me plenty of fresh ideas.
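The convexity idea is easy to see in a toy setting. A minimal sketch, assuming (as Gardenfors does in his prototype-based account) that concepts form around prototype points and that each point in the space is assigned to its nearest prototype: under Euclidean distance this yields a Voronoi tessellation, whose cells are convex, so any point between two members of a concept belongs to that concept. The prototype coordinates and concept names below are invented for illustration.

```python
import numpy as np

# Toy 2-D "conceptual space" with hypothetical prototypes.
# Nearest-prototype classification under Euclidean distance
# partitions the space into Voronoi cells, which are convex.
prototypes = {
    "warm": np.array([0.8, 0.2]),
    "cool": np.array([0.2, 0.8]),
    "neutral": np.array([0.5, 0.5]),
}

def classify(point):
    """Assign a point to the concept with the nearest prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(point - prototypes[c]))

def betweenness_holds(a, b, steps=11):
    """Convexity criterion: every point on the segment between two
    members of a concept should belong to the same concept."""
    concept = classify(a)
    if classify(b) != concept:
        return False
    return all(
        classify((1 - t) * a + t * b) == concept
        for t in np.linspace(0.0, 1.0, steps)
    )
```

With these prototypes, two points near the "warm" prototype, say (0.9, 0.1) and (0.7, 0.3), classify the same, and every point between them does too, which is the geometric content of the convexity claim.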
It’s fascinating how Gardenfors intuited the embedding space of semantics, something we figured out computationally a few years later using language models. But boy, he is a bad writer: it’s really hard to follow him, the arguments are loose, and he constantly refers to the “next chapter” for more evidence.