When do PDP neural networks learn localist representations?

Abstract

One of the most distinctive characteristics of the Parallel Distributed Processing (PDP) approach to cognitive modeling is that representations are distributed across a large set of units. However, this is not always the case. In a series of simulations we show that PDP neural networks tend to form localist representations under certain conditions. First, localist representations develop when the mapping between input and output patterns is arbitrary. A second pressure to learn localist codes comes from having to keep multiple representations active at the same time. Introducing biologically plausible constraints on the network architecture also fosters the development of local codes. Taken together, these findings suggest that the widespread assumption that PDP neural networks learn distributed representations is often wrong. Moreover, exploring the computational reasons why PDP networks learn localist representations provides insight into why selective neurons are often found in the brain.
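
The first condition above can be illustrated with a toy simulation. The sketch below is not the paper's actual model: the network sizes, the use of a plain single-hidden-layer backpropagation network, and the selectivity thresholds are all illustrative assumptions. It trains the network on an arbitrary mapping between random binary input and output patterns and then counts how many hidden units respond strongly to exactly one training pattern, which is one simple way of flagging localist units.

```python
import numpy as np

# Illustrative sketch only (not the simulations reported in the paper):
# train a one-hidden-layer backprop network on an arbitrary input-output
# mapping, then check which hidden units look "localist", i.e. strongly
# active for exactly one training pattern.

rng = np.random.default_rng(0)
n_patterns, n_in, n_hidden, n_out = 10, 20, 30, 20

# Arbitrary (unstructured) mapping: random binary inputs to random binary outputs.
X = (rng.random((n_patterns, n_in)) > 0.5).astype(float)
Y = (rng.random((n_patterns, n_out)) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

lr = 0.5
for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)            # hidden activations
    O = sigmoid(H @ W2 + b2)            # output activations
    err_out = (O - Y) * O * (1 - O)     # output-layer error (squared-error loss)
    err_hid = (err_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ err_out
    b2 -= lr * err_out.sum(0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(0)

# Crude selectivity index: a hidden unit counts as localist if it is highly
# active (> 0.8) for exactly one pattern and nearly silent (< 0.2) for all
# others. The thresholds are arbitrary choices made for this illustration.
H = sigmoid(X @ W1 + b1)
localist = [(j, int(np.argmax(H[:, j])))
            for j in range(n_hidden)
            if (H[:, j] > 0.8).sum() == 1 and (H[:, j] < 0.2).sum() == n_patterns - 1]
print(f"{len(localist)} of {n_hidden} hidden units look localist:", localist)
```

A selectivity measure of this kind can then be compared across structured versus arbitrary mappings, or across architectures with and without the additional constraints discussed in the paper.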
