Learners are able to infer the meanings of words by observing the consistent statistical association between words and their referents, but the nature of the learning mechanisms underlying this process is unknown. We conducted an artificial cross-situational word learning experiment in which either words consistently appeared with multiple objects (extra object condition) or objects consistently appeared with multiple words (extra word condition). In both conditions, participants learned one-to-one ("mutually exclusive") word-object mappings. We tested whether a number of computational models of word learning learned mutually exclusive lexicons. Simple associative models learned mutually exclusive lexicons in at most one of the two conditions. In contrast, a more complex Bayesian model, which assumed that only some objects were being talked about and only some words referred, learned mutually exclusive lexicons in both conditions, consistent with the performance of human learners.