Chunking or not chunking? How do we find words in artificial language learning?


What is the nature of the representations acquired in implicit statistical learning? Recent results in the field of language learning have shown that adults and infants can find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. Two different kinds of mechanisms may account for these data: participants either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they become progressively sensitive to the actual values of the transitional probabilities. The two accounts are difficult to differentiate because they tend to make similar predictions in similar experimental settings. In this study, we present two experiments aimed at disentangling these two theories. Participants had to learn two sets of pseudo-linguistic regularities (L1 and L2) presented in the context of a Serial Reaction Time (SRT) task. L1 and L2 were either unrelated, or the intra-word transitions of L1 became the inter-word transitions of L2. The two models make opposite predictions in these two situations. Our results indicate that the nature of the representations depends on the learning conditions. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, their performance was strongly influenced by the actual values of the transitional probabilities.
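The contrast between the two accounts can be illustrated with a minimal sketch (not the authors' materials: the syllables, the word inventory, and the boundary threshold below are our own illustrative choices). In a stream built by concatenating words in random order, transitions inside a word have high probability while transitions across word boundaries have lower probability; a chunking learner could exploit exactly these dips to segment the stream:

```python
from collections import defaultdict

def transitional_probabilities(syllables):
    """Estimate TP(B|A) = count(A followed by B) / count(A) from a syllable stream."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment_by_tp_dips(syllables, tps, threshold):
    """Posit a word boundary wherever the transitional probability dips
    below the threshold (a simple stand-in for the chunking account)."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Illustrative stream: three bisyllabic "words" in varied order.
order = ["tuba", "gola", "bidu", "tuba", "bidu", "gola", "tuba", "gola", "bidu"]
stream = [w[i:i + 2] for w in order for i in (0, 2)]

tps = transitional_probabilities(stream)
# Intra-word transitions (e.g. "tu" -> "ba") have TP = 1.0; inter-word
# transitions are lower, so a threshold between them recovers the words.
recovered = segment_by_tp_dips(stream, tps, threshold=0.9)
```

A purely statistical learner, by contrast, would be modelled as retaining the `tps` table itself rather than the segmented chunks, which is why relabelling intra-word transitions of L1 as inter-word transitions of L2 pulls the two accounts apart.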
