Cues for different language learning tasks: Simultaneous or successive learning?
The claim that the environment is too impoverished to guide language learning is typically based on analyses of a single source of information in isolation. However, language is not learned in a vacuum: multiple cues in the environment together constrain the child’s hypotheses about language structure. Such cues have been shown to support learning to segment speech into individual words, learning the grammatical categories within the language (such as which words are nouns and which are verbs), and linking words to their meanings in the world. Yet these tasks have been treated separately, even though there is substantial overlap in the cues proposed to be important for segmentation, categorisation, and word–meaning mapping.
In this study, we will investigate which cues are useful for learning each of these tasks, and we will probe how early these cues become available for each task. For instance, our computational modelling work has shown that very high frequency words in speech (such as you or the) can help to divide speech into individual words, but we also know that these words indicate the grammatical category of the words around them (verbs tend to come after you and nouns tend to come after the). So, does this high-frequency word cue help both to segment and to categorise at the same early stage of language learning, or is it applied successively for different language learning tasks?
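The dual role of high-frequency words can be illustrated with a minimal sketch. This is not the project’s actual model: the anchor set, the boundary heuristic, and the category labels are all illustrative assumptions, shown only to make the idea concrete that one cue can serve segmentation and categorisation at once.

```python
# Illustrative sketch only: high-frequency "anchor" words (assumed here
# to be "you" and "the") both propose word boundaries in an unsegmented
# stream and hint at the category of the word that follows them.

ANCHORS = {"you", "the"}  # hypothetical set of frequent function words

def segment(stream):
    """Split an unsegmented character stream at the edges of anchor words."""
    chunks, i = [], 0
    while i < len(stream):
        for anchor in ANCHORS:
            if stream.startswith(anchor, i):
                chunks.append(anchor)          # anchor edges mark boundaries
                i += len(anchor)
                break
        else:
            # accumulate characters until the next anchor occurrence
            j = i
            while j < len(stream) and not any(
                stream.startswith(a, j) for a in ANCHORS
            ):
                j += 1
            chunks.append(stream[i:j])
            i = j
    return chunks

def categorise(words):
    """Guess the category of each word from the anchor preceding it."""
    guesses = {}
    for prev, word in zip(words, words[1:]):
        if prev == "you":
            guesses[word] = "verb?"   # verbs tend to follow "you"
        elif prev == "the":
            guesses[word] = "noun?"   # nouns tend to follow "the"
    return guesses

words = segment("yousawthedog")
# words -> ["you", "saw", "the", "dog"]
labels = categorise(words)
# labels -> {"saw": "verb?", "dog": "noun?"}
```

Here the very same anchor words do double duty: their edges segment the stream, and their identity labels the neighbouring words, which is the simultaneity question the study asks.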
This project will also investigate the extent to which the sort of learning required for segmentation, categorisation, and word-meaning mappings is language-specific or draws on general-purpose learning mechanisms. We will investigate this by comparing language learning to learning from musical or visual sequences.
Start Date: October 2014
Duration: 3 years
(Work Package 4)