As adults we say around 10,000–15,000 words a day and hear and understand many more. It’s hard to remember a time when we couldn’t understand our native language, or communicate our thoughts and needs to other people with words. Yet that is precisely the position of a newborn baby.
The newborn enters the world unable to utter a single word, but within a few short years they will be speaking like an adult. By the time they reach their first birthday, they have already started to understand some of what people are saying around them, and may even be producing a few words themselves. Amazingly, by their second birthday they know enough words to begin combining them into sentences. And finally, by the time they reach four or five years of age, they can hold conversations with other children and adults. To succeed in this language learning journey, babies and children need to do a lot.
- They need to find the words in the speech they hear. Unlike written language, speech does not put reliable pauses between words. This means that findingwherewordsbeginandendcanbetricky.
- Once babies have found the words, they need to figure out what they mean. They need to match the word ‘dog’ to the large fluffy animal in their environment, or the word ‘kick’ to the action of kicking a ball. But they also need to learn the limits on words’ meanings; for example, that ‘dog’ refers to all dogs, even dogs in pictures or on the TV, but never to cats, despite the fact that cats look quite similar to dogs.
- The next step is joining those words together to make sentences. To do this they need to learn the rules (or grammar) of their native language(s). The rules that we all use when we put our sentences together mean that we can understand what other speakers of our language are telling us. Babies and children have to learn that the sentences “the dog bit the man” and “the man bit the dog” have very different meanings, even though they contain exactly the same words.
- Finally, they need to understand the hidden meaning behind the sentences they hear (called inference making). What we say is often not what we mean; for example, when we make jokes or use irony or sarcasm (“oh that’s just brilliant” in response to dropping a cup of coffee on the floor) or sometimes when we just express desires (consider the meaning of the phrase “I feel like a pizza” – no one who says this actually feels like they *are* a pizza). Children have to discover what we really mean when we use language.
Researchers in the universities that make up the LuCiD centre are interested in understanding more about these steps in language learning. Using a range of techniques, we are able to find out what babies know about language, even before they can tell us. For example, babies look at or listen to new and familiar things differently. We can use these preferences to tell us what they have learned about language. LuCiD researchers at all three of our universities took part in a global study that used this method to find out whether babies really do like listening to baby-talk. Baby-talk is a particular way of speaking that adults tend to use when talking to babies; it has a higher pitch, uses more varied intonation, and is slower and more repetitive. Some researchers have suggested that baby-talk is useful because babies prefer to listen to it, and are thus more likely to learn from it than from normal speech. In this study, the researchers found that this was indeed the case [1]. Most of the babies that took part listened longer to baby-talk than to adult-directed speech, suggesting that baby-talk is a useful tool to help your baby take their first steps into learning language.
A lot of the studies we run with very young children use eye trackers. With this exciting technology we can see, in real time, where babies and children look when they hear different words. This tells us a lot about what babies know about individual words and how they understand sentences as they unfold. Research studies have discovered that when two-year-olds hear the word ‘eat’ they are more likely to look at a picture of a cake than a picture of a bird [2]. This means that, at two years old, they can already predict what the rest of the sentence will be before they hear the words. This is really useful for conversations, because speech is so fast that it can be difficult to process every single word as it is heard. Being able to predict upcoming words, rather than having to wait to hear each one before understanding the sentence, really helps children, and adults, understand what is said to them.
With older children, who can talk to us, we play fun games to find out how they learn to put words together into grammatical sentences. For example, how do children learn that some sentences are grammatical (e.g. “I unzipped the coat”) but others (e.g. “I unsqueezed the ball”) are not? We know that parents rarely, if ever, correct children’s grammatical errors, so they’re not learning from being corrected. Ben Ambridge, a LuCiD researcher, and his colleagues have discovered that children as young as five have learned “hidden” rules (ones even adults are unaware of) about which types of actions can and can’t be reversed by adding un-. Actions that can be reversed with un- tend to involve attaching, hand movements or circular motion, which is why you can “unzip”, “unbutton” or “unscrew” but can’t “unstand”, “uncome” or “ungo”. Children can’t explain this rule, of course, but when we ask them which un- forms you can and can’t say, they clearly know it.
As you can see, we have already discovered a lot about how babies and children learn language, but the more we learn, the more questions we have.
1. The ManyBabies Consortium (2019). Quantifying sources of variability in infancy research using the infant-directed speech preference. Advances in Methods and Practices in Psychological Science. Registered report with in-principle acceptance.
2. Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake – but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38, 843–847.