Be-Bop-A-Lula: A CHREST model of infant word segmentation.
Martyn Lloyd-Kelly, Fernand Gobet, and Peter Lane
Poster presented at the Fifth Implicit Learning Seminar, Lancaster University, June 2015.
Abstract
In a well-known study, Saffran et al. (1996) used a head-turn preference procedure to show that 8-month-old infants can discriminate between trisyllabic nonsense words (e.g., "bidaku") after a 2-minute training phase. Words heard during the training phase are "familiar"; "novel" words are those not heard in the training phase (non-words) or words constructed from combinations of familiar-word syllables (part-words). The data indicated that infants are sensitive to forward transitional probabilities. Several computational models have simulated aspects of these data, including simple recurrent networks (Elman, 1991; French, Addyman, & Mareschal, 2011), connectionist autoassociators (French et al., 2011), Kohonen networks (Anderson, 1999), and PARSER (Perruchet & Vinter, 1998), a symbolic model. In these models, transitional probabilities are approximated by learning mechanisms based on connectionist algorithms or on the creation of chunks. Infants' ability to discriminate between words, non-words, and part-words in Saffran et al.'s study is simulated by these models using various metrics. However, to our knowledge, no model has replicated the absolute times recorded. Thus, we provide a quantitative explanation of word segmentation using a CHREST model.
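
As an illustration not drawn from the poster itself: the forward transitional probability of syllable Y following syllable X is the frequency of the pair XY divided by the frequency of X. The Python sketch below computes these values over a synthetic syllable stream; the second word, "pagola", is hypothetical, chosen only so that within-word transitions (probability 1.0) stand out against word-boundary transitions (probability near 0.5).

    import random
    from collections import Counter

    def forward_tps(syllables):
        # TP(x -> y) = count of the pair (x, y) / count of x as a predecessor.
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

    # Build a continuous stream by concatenating two trisyllabic words in
    # random order, loosely mimicking the 2-minute training phase.
    words = [["bi", "da", "ku"], ["pa", "go", "la"]]  # "pagola" is hypothetical
    stream = [s for _ in range(200) for s in random.choice(words)]

    for pair, tp in sorted(forward_tps(stream).items()):
        print(pair, round(tp, 2))  # e.g. ('bi', 'da') 1.0 vs ('ku', 'pa') ~0.5

Because within-word pairs always co-occur while word-boundary pairs depend on which word is drawn next, the dip in transitional probability marks the word boundary, which is the statistical cue the models above approximate.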