Learning nonadjacent dependencies in thought, language, and action: Not so hard after all.

Abstract

Learning to represent hierarchical structure and its nonadjacent dependencies (NDs) is thought to be difficult. I present three simulations of ND learning using a simple recurrent network (SRN). In Simulation 1, I show that the model can learn distance-invariant representations of nonadjacent dependencies. In Simulation 2, I show that purely localist SRNs can learn abstract, rule-like relationships. In Simulation 3, I show that SRNs, like people, exhibit facilitated learning when there are correlated perceptual and semantic cues to the structure. Together, these simulations show that, contrary to previous claims, SRNs are capable of learning abstract, rule-like nonadjacent dependencies and exhibit critical perception-syntax and semantics-syntax interactions during learning. The studies refute the claim that neural networks and other associative models are fundamentally incapable of representing hierarchical structure, and show how recurrent networks can provide insight into the principles underlying human learning and the representation of hierarchical structure.
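As a rough illustration of the architecture the abstract refers to, the sketch below implements a minimal Elman-style SRN trained to predict the next token of a toy nonadjacent-dependency grammar (a ... b / c ... d, with variable filler material in between). The vocabulary, network sizes, toy grammar, and training regime are illustrative assumptions and are not the simulations reported in the paper.

```python
# Minimal Elman-style simple recurrent network (SRN) on a toy
# nonadjacent-dependency grammar. All settings here are assumptions
# for illustration, not the paper's actual simulations.
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: dependent pairs (a..b, c..d) with fillers in between.
vocab = ["a", "b", "c", "d", "x1", "x2", "x3", "#"]  # '#' marks sequence end
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 20  # vocabulary and hidden-layer sizes (assumed)
pairs = [("a", "b"), ("c", "d")]

def make_sequence():
    """Generate one string with a nonadjacent dependency: a...b or c...d."""
    head, tail = pairs[rng.integers(len(pairs))]
    fillers = list(rng.choice(["x1", "x2", "x3"], size=rng.integers(1, 4)))
    return [head] + fillers + [tail, "#"]

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Weights: input->hidden, context(copied hidden)->hidden, hidden->output.
W_xh = rng.normal(0, 0.1, (H, V))
W_hh = rng.normal(0, 0.1, (H, H))
W_hy = rng.normal(0, 0.1, (V, H))
lr = 0.1

for epoch in range(2000):
    seq = [idx[w] for w in make_sequence()]
    h = np.zeros(H)  # context units start empty for each sequence
    for t in range(len(seq) - 1):
        x = one_hot(seq[t])
        target = one_hot(seq[t + 1])
        h_new = np.tanh(W_xh @ x + W_hh @ h)   # hidden state from input + context
        y = softmax(W_hy @ h_new)              # next-token prediction
        # Elman-style training: backpropagate through the current step only;
        # the previous hidden state is treated as a fixed "context" input.
        dy = y - target                         # softmax cross-entropy gradient
        dh = (W_hy.T @ dy) * (1 - h_new ** 2)   # gradient at hidden layer (tanh)
        W_hy -= lr * np.outer(dy, h_new)
        W_xh -= lr * np.outer(dh, x)
        W_hh -= lr * np.outer(dh, h)
        h = h_new                               # copy hidden state to context

# After training, the prediction at the tail position (b vs. d) should depend
# on the nonadjacent head token rather than on the adjacent filler.
```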

