Biologically Plausible, Human-scale Knowledge Representation

Abstract

Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), mesh binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990; Plate, 2003). Recent theoretical work has suggested that most of these methods will not scale well; that is, they cannot encode structured representations that use any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions (Stewart & Eliasmith, 2011; Eliasmith, in press). Here we present an approach that will scale appropriately, and which is based on neurally implementing a type of Vector Symbolic Architecture (VSA). Specifically, we construct a spiking neural network composed of about 2.5 million neurons that employs a VSA to encode and decode the main lexical relations in WordNet, a semantic network containing over 100,000 concepts (Fellbaum, 1998). We experimentally demonstrate the capabilities of our model by measuring its performance on three tasks which test its ability to accurately traverse the WordNet hierarchy, as well as to decode sentences employing any WordNet term while preserving the original lexical structure. We argue that these results show that our approach is uniquely well-suited to providing a biologically plausible, human-scale account of the structured representations that underwrite cognition.
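The conjunctive binding the abstract refers to (Smolensky, 1990; Plate, 2003) is typically realized with circular convolution, the operator underlying Holographic Reduced Representations. The sketch below is a minimal, non-spiking illustration of how a VSA of this kind can encode a role-filler structure and decode it by unbinding; the dimensionality, vocabulary, and role names are hypothetical and are not the parameters used in the model described above.

```python
import numpy as np

def bind(a, b):
    """Conjunctive binding via circular convolution (Plate-style HRR)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Approximate unbinding: bind the trace with the cue's involution."""
    cue_inv = np.concatenate(([cue[0]], cue[-1:0:-1]))
    return bind(trace, cue_inv)

def random_vector(d, rng):
    """Random unit vector; high-dimensional random vectors are nearly orthogonal."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
d = 512  # illustrative dimensionality; the paper's choice may differ

# Hypothetical lexical and role vectors
vocab = {w: random_vector(d, rng) for w in
         ["dog", "chase", "cat", "subject", "verb", "object"]}

# Encode "dog chases cat" as a superposition of role-filler bindings
sentence = (bind(vocab["subject"], vocab["dog"]) +
            bind(vocab["verb"], vocab["chase"]) +
            bind(vocab["object"], vocab["cat"]))

# Decode the object role and clean up against the vocabulary
noisy = unbind(sentence, vocab["object"])
best = max(vocab, key=lambda w: np.dot(noisy, vocab[w]))
print(best)  # expected: "cat"
```

Unbinding returns only a noisy estimate of the filler, so a clean-up step (here, a nearest-neighbor search over the vocabulary) is needed to recover the original lexical item; the resource demands of this clean-up are central to the scaling question the abstract raises.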

