Cues for different language learning tasks: Simultaneous or successive learning?

It has long been claimed that the child’s experience of language is not sufficient to enable them to learn language, and so language structure must be innate and internal to the child. However, this traditional view depends on considering the child’s experience of language only in terms of the sequences of words that children hear. Yet, the language environment is rich, multimodal, noisy, and stimulating, going far beyond mere sequences of words. In this work package we investigated which sources of information in the child’s environment are available to support learning to identify words, determine their meanings, and establish their grammatical roles in sentences. We also explored how new media are reshaping children’s experience of this language environment.

Using a combination of computational modelling, corpus analysis of child-directed speech, experimental studies, and survey methods, we discovered that:

  • The arrangement of sounds in words, the distribution of words in speech, the gestures of caregivers, and the presence and absence of objects and events around children each contributed to the early stages of language learning.
  • Combinations of cues were even more powerful in supporting learning than individual cues, and, counterintuitively, variability or noise in these cues was optimal for language learning.
  • Children’s ability to identify words in artificial speech related to their vocabulary development in the first two years of life, and the cues they relied on to identify words were the same as those used to identify the grammatical role of words in the language.
  • At the point when language learners acquire their first words, they are already sensitive to the grammatical role of those words: vocabulary and grammar appear to be acquired simultaneously and early in language development.
  • Use of new media (e.g., smartphones) was substantial in children’s preschool years, but children’s early language development was best predicted by their time spent co-reading with their caregivers, rather than the time spent on new media devices.
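The statistical cue behind the word-identification findings above can be illustrated with a toy computation. The sketch below (illustrative only, not code from the project; all names are hypothetical) segments a continuous stream of syllables from a miniature artificial language by estimating transitional probabilities between adjacent syllables, positing word boundaries where those probabilities dip — the distributional cue classically studied in artificial-speech segmentation experiments.

```python
# Illustrative sketch only: statistical word segmentation of continuous
# "artificial speech" using transitional probabilities (TPs) between syllables.
from collections import Counter
from itertools import permutations

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold):
    """Posit a word boundary wherever the TP dips below the threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy artificial language of three trisyllabic words, concatenated with no
# pauses (every ordering of the three words, so boundary TPs stay low).
lexicon = ["tupiro", "golabu", "bidaku"]
stream = [w for order in permutations(lexicon) for w in order]
syllables = [w[i:i + 2] for w in stream for i in (0, 2, 4)]

# Within-word TPs are 1.0, while TPs across word boundaries are at most 0.6,
# so any threshold between those values recovers the original words.
recovered = segment(syllables, threshold=0.8)
assert recovered == stream
```

The key property, mirroring the experimental logic, is that syllable-to-syllable predictability is high inside words and low across word boundaries, so the dips in transitional probability mark where one word ends and the next begins.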

Project Team: Padraic Monaghan (Lead), Morten Christiansen, Caroline Rowland, Gert Westermann, Rebecca Frost, Kirsty Dunn, and Gemma Taylor.

Start Date: October 2014

Duration: 3 years

(Work Package 4)

 

Key Outputs

Monaghan, P., Schoetensack, C., & Rebuschat, P. (2019). A single paradigm for implicit and statistical learning. Topics in Cognitive Science, 11(3), 536–554. https://doi.org/10.1111/tops.12439

Frost, R., Monaghan, P., & Christiansen, M. H. (2019). Mark my words: High frequency marker words impact early stages of language learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(10), 1883–1898. https://doi.org/10.1037/xlm0000683

Taylor, G., Monaghan, P., & Westermann, G. (2018). Investigating the association between children’s screen media exposure and vocabulary size in the UK. Journal of Children and Media, 12(1), 51–65. https://doi.org/10.1080/17482798.2017.1365737

Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(Suppl. 1), 14–39. https://doi.org/10.1111/lang.12221

Frost, R. L. A., & Monaghan, P. (2016). Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech. Cognition, 147, 70–74. https://doi.org/10.1016/j.cognition.2015.11.010

Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21–34. https://doi.org/10.1111/tops.12239