Multiple natural language cues assist the processing of hierarchical structure.

Tony Trotter, Rebecca Frost, and Padraic Monaghan presented this poster at the 22nd annual AMLaP Conference, Bilbao, Spain, in September 2016.


Abstract:

Hierarchical centre embeddings (HCEs) in natural language have been taken as evidence that language is not a finite-state system. Whilst phrase structure may be necessary to produce these structures, their comprehension may be sequential in nature. If so, surface-level cues would be vital for dependency detection. Natural speech cues – e.g. pitch prosody, rhythm, and semantic similarity – are often neglected in artificial language research, and if sequential processing does occur, their absence could explain the difficulty of learning artificial materials. In the present artificial grammar learning experiment, 80 participants were trained in one of five conditions – baseline, phonological similarity between dependent elements, rhythmic cues, pitch prosodic cues, and a combined condition – to assess whether grouping cues facilitate the processing of artificial HCEs, generated with the AnBn rule. Participants were trained on an artificial language, then performed a grammaticality classification task on novel materials. Early testing phases produced chance performance. The combined condition elicited the greatest accuracy in intermediate phases, before showing a large decrement in the late testing phase, where the phonological cues condition produced the highest accuracy. Multiple information sources thus support the early learner's performance; with more experience, explicit linguistic dependency cues become more reliable.
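To illustrate the AnBn rule mentioned above, the following minimal sketch generates centre-embedded strings of the form a1 a2 … an bn … b2 b1, where each (ai, bi) is a dependent pair. The syllable pairs here are hypothetical placeholders, not the actual stimuli used in the experiment.

```python
import random

# Hypothetical dependent syllable pairs (illustrative only; not the
# experimental stimuli). Each tuple is an (A-element, B-element) dependency.
PAIRS = [("bi", "bo"), ("la", "lo"), ("mi", "mo"), ("ka", "ko")]

def generate_anbn(n, rng=random):
    """Generate an A^n B^n string with centre-embedded (nested) dependencies.

    The i-th A element from the start depends on the i-th B element from
    the end, giving the nesting a1 a2 ... an bn ... b2 b1.
    """
    chosen = [rng.choice(PAIRS) for _ in range(n)]
    a_part = [a for a, _ in chosen]
    b_part = [b for _, b in reversed(chosen)]  # reverse to nest dependencies
    return a_part + b_part

# Example: a depth-3 centre embedding, e.g. ['la', 'mi', 'bi', 'bo', 'mo', 'lo']
sequence = generate_anbn(3)
```

Because the B half mirrors the A half, the first A pairs with the last B, which is exactly the long-distance dependency that surface cues (phonological similarity, prosody, rhythm) could help a sequential learner detect.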