Bayesian Word Learning in Multiple Languages

Abstract

Infant language learners face the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers' referential intentions. We begin by showing that a Bayesian model optimized for monolingual input (Frank et al., 2009) does not adequately handle bilingual input, especially as referential ambiguity increases. We then propose an extended Bayesian model that approximates infants' mutual exclusivity bias in order to meet the differing demands of monolingual and bilingual learning situations. The extended model is evaluated on corpora of real child-directed speech, showing that its performance can be optimized for both monolingual and bilingual contexts. We also show that including both monolingual and bilingual demands in model optimization yields significantly different results than optimizing for either context alone.

References

Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 1-8.
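To make the modeling idea concrete, the following is a toy sketch of cross-situational lexicon scoring with a mutual-exclusivity-style penalty. All words, objects, scenes, and probabilities here are hypothetical illustrations chosen for this sketch; this is not the paper's model or its corpora.

```python
from collections import defaultdict
from itertools import product

# Hypothetical cross-situational scenes: (words heard, objects present).
scenes = [
    ({"dog", "ball"}, {"DOG", "BALL"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
    ({"ball"},        {"BALL"}),
]

def score_lexicon(lexicon, scenes, me_penalty=0.1):
    """Unnormalized score of a word->object lexicon.

    A heard word whose mapped object is present in the scene is well
    explained (factor 0.9), otherwise poorly explained (factor 0.1).
    A mutual-exclusivity-style penalty discounts lexicons that map
    several words to the same object."""
    score = 1.0
    for words, objects in scenes:
        for w in words:
            score *= 0.9 if lexicon.get(w) in objects else 0.1
    # Mutual exclusivity: penalize many-to-one mappings.
    counts = defaultdict(int)
    for obj in lexicon.values():
        counts[obj] += 1
    for n in counts.values():
        score *= me_penalty ** (n - 1)
    return score

words = sorted({w for ws, _ in scenes for w in ws})
objects = sorted({o for _, os in scenes for o in os})

# Enumerate all full lexicons and keep the highest-scoring one.
best = max(
    (dict(zip(words, assignment))
     for assignment in product(objects, repeat=len(words))),
    key=lambda lex: score_lexicon(lex, scenes),
)
print(best)  # → {'ball': 'BALL', 'cup': 'CUP', 'dog': 'DOG'}
```

Note the role of the penalty: without it, a lexicon mapping both "dog" and "cup" to DOG explains every scene just as well as the intuitively correct one; the mutual-exclusivity term is what breaks that tie in favor of one word per object.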
