From lexical to abstract knowledge: The case of wh-questions
The aim of this project was to investigate the balance between lexically-specific and abstract syntactic knowledge and, in particular, how the latter develops from the former. To do so, we focussed on the well-studied and relatively circumscribed domain of English wh-questions, a structure for which children frequently make errors (previous studies have observed error rates of 50% and above for certain question types).
The first part of the project focussed on a particular type of error that children very commonly make when producing questions; for example, saying “*Why he can’t have one?” instead of “Why can’t he have one?” – that is, failing to switch the subject (e.g., “he”) and the auxiliary verb (e.g., “can’t”). Previous research, including some of our own (Ambridge et al., 2006; Rowland, 2007), suggested that these errors were caused mainly by the lack of wh-word+auxiliary combinations (e.g., Why+can’t) in the input. Our LuCiD research found that, in fact, this is only a relatively small part of the story. A much more important factor is the frequency in the input of the errorful “chunk” – here, “he can’t”. In other words, the real reason children say things like “*Why he can’t have one?” instead of “Why can’t he have one?” is simply that they’ve heard he+can’t (e.g., “He can’t do this”, “He can’t do that”, “He can’t go there”…) much more often than can’t+he (which, of course, turns up only in questions). We are currently writing up this study for publication in a journal, and have already presented it at several conferences.
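The chunk-frequency account sketched above can be illustrated with a toy bigram count over child-directed utterances. The corpus sample and the resulting counts below are purely hypothetical (the actual study used large corpora of child-directed speech); the point is only that the declarative-order chunk “he can’t” can outnumber the inverted “can’t he”.

```python
from collections import Counter

# Toy sample of child-directed utterances (hypothetical, for illustration
# only; not the corpora used in the study).
utterances = [
    "he can't do this",
    "he can't go there",
    "he can't have one",
    "why can't he have one",
    "can't he come too",
]

# Count adjacent word pairs (bigrams) across all utterances.
bigrams = Counter()
for utt in utterances:
    words = utt.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

# The declarative-order chunk "he can't" outnumbers the inverted "can't he",
# mirroring the input asymmetry claimed to drive non-inversion errors.
print(bigrams[("he", "can't")])   # → 3
print(bigrams[("can't", "he")])   # → 2
```

On this view, a model (or a child) tracking such chunk frequencies would be biased toward producing the higher-frequency declarative-order string, yielding errors like “*Why he can’t have one?”.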
The second part of the project investigated how children produce complex questions like Is the crocodile who’s hot eating? Specifically, we tested the possibility that children could use the input to learn slot-and-frame patterns for (a) complex noun phrases (the [THING] who’s [PROPERTY]) and (b) simple questions (Is [THING] [ACTION]ing?), and then combine these patterns to end up with a slot-and-frame pattern for complex questions (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). A total of 122 four- to six-year-old children were trained on simple questions (e.g., Is the bird cleaning?) and either complex noun phrases (e.g., the bird who’s sad; the Experimental group) or matched simple noun phrases (e.g., the sad bird; the Control group). Training studies of this kind are notoriously difficult, and the results were unfortunately rather inconclusive. In general, the Experimental and Control groups did not differ in their ability to produce complex questions (with new characters and actions) in the test phase. That said, the Experimental group did show a greater ability than the Control group to produce complex questions on the first test trial, suggesting that the picture may have been complicated by learning taking place during the test session itself. The Experimental group also showed some evidence of generalizing the particular slot-and-frame pattern they were taught from training to test (i.e., they were more likely than the Control group to say the [THING] who’s [PROPERTY] as opposed to the [THING] that’s [PROPERTY]). Finally, the Experimental group showed a lower rate of auxiliary-doubling errors (e.g., *Is the crocodile who’s hot is eating?), which are the most common type of error for these complex sentences. A write-up of this study is currently under review for publication in the journal Language and Cognition.
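The combination hypothesis above amounts to slotting the output of one frame into the [THING] slot of another. A minimal sketch of that idea, with frames written as simple string templates (the function names and fillers are illustrative, not the study’s materials):

```python
# Sketch of the slot-and-frame combination hypothesis: a complex noun-phrase
# frame is inserted into the [THING] slot of a simple question frame.
# All names and example fillers here are illustrative assumptions.

def complex_np(thing: str, prop: str) -> str:
    """Complex noun-phrase frame: 'the [THING] who's [PROPERTY]'."""
    return f"the {thing} who's {prop}"

def simple_question(thing_phrase: str, action: str) -> str:
    """Simple question frame: 'Is [THING] [ACTION]ing?'."""
    return f"Is {thing_phrase} {action}ing?"

# Combining the two frames yields the complex question in one step, with no
# opportunity to repeat the auxiliary (no auxiliary-doubling error).
print(simple_question(complex_np("crocodile", "hot"), "eat"))
# → Is the crocodile who's hot eating?
```

Note that because the whole complex noun phrase fills a single [THING] slot, the combined pattern contains exactly one auxiliary, which is one way of understanding the lower rate of auxiliary-doubling errors in the Experimental group.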
Start Date: September 2016
Duration: 3 years
(Work Package 7)
McCauley, S.M., Bannard, C., Theakston, A., Davis, M., Cameron-Faulkner, T., & Ambridge, B. (2019). Multiword units predict non-inversion errors in children’s wh-questions: What corpus data can tell us? In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.