Flexible Spatial Language Behaviors: Developing a New Dynamic Theoretical Framework

Abstract

Spatial communication is highly flexible, but implemented spatial language models to date have focused on a limited set of behaviors. The present work takes a step toward addressing this limitation by implementing a new spatial language model grounded in the principles of neural population dynamics. To rigorously test our model, we implement it on a robotic platform continuously linked to real-world visual input. Our model extracts categorical, cognitive information from low-level sensory input through the system dynamics, permitting the dynamic integration of visual space, spatial language, and color within a single, unified theoretical framework. In a series of demonstrations, we then show how this model autonomously generates a range of spatial language behaviors, including generating a spatial description, identifying the color of an object at a described location, and dynamically structuring camera movements according to internal decision dynamics.
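The neural population dynamics referenced in the abstract are commonly formalized as dynamic neural fields: continuous activation variables over a feature dimension whose lateral interactions (local excitation, global inhibition) stabilize a single activation peak, turning graded sensory input into a category-like decision. The Python sketch below illustrates that general mechanism only; the field dimension, kernel shape, parameter values, and function names are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a one-dimensional dynamic
# neural field with Amari-style activation dynamics,
#   tau * du/dt = -u + h + input + conv(kernel, f(u))
import numpy as np

def sigmoid(u, beta=4.0):
    """Soft threshold turning field activation into population output."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def interaction_kernel(x, c_exc=1.0, sigma_exc=4.0, c_inh=0.5):
    """Local excitation plus global inhibition: the standard recipe for
    stabilizing a single activation peak (a categorical decision)."""
    return c_exc * np.exp(-x**2 / (2 * sigma_exc**2)) - c_inh

# Field spanning one feature dimension (e.g., horizontal image position).
n = 181
x = np.arange(n) - n // 2
h = -5.0                                   # resting level below threshold
u = np.full(n, h)
kernel = interaction_kernel(x)

# Localized input, e.g., the salience of an object at position +30
# (values chosen only so that a peak forms in this toy example).
stimulus = 6.0 * np.exp(-(x - 30.0)**2 / (2 * 3.0**2))

tau, dt = 10.0, 1.0
for _ in range(300):                       # Euler integration of the field
    conv = np.convolve(sigmoid(u), kernel, mode="same")
    u += (dt / tau) * (-u + h + stimulus + conv)

print(f"stabilized peak at x = {x[np.argmax(u)]}")
```

In a model of this family, coupled fields over visual space, spatial terms, and color would pass activation to one another, which is one way the integration described in the abstract could be realized.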

