Using automated acoustic analysis to explore the link between planning and articulation in second language speech production.

Matthew Goldrick, Yosi Shrem, Oriana Kilbourn-Ceron, Cristina Baus, Joseph Keshet
Published in: Language, Cognition and Neuroscience (2020)
Speakers learning a second language show systematic differences from native speakers in the retrieval, planning, and articulation of speech. A key challenge in examining the interrelationship between these differences at various stages of production is the need for manual annotation of fine-grained properties of speech. We introduce a new method for automatically analyzing voice onset time (VOT), a key phonetic feature indexing differences in sound systems cross-linguistically. In contrast to previous approaches, our method allows reliable measurement of prevoicing, a dimension of VOT variation used by many languages. Analysis of VOTs, word durations, and reaction times from German-speaking learners of Spanish (Baus et al., 2013) suggests that while there are links between the factors impacting planning and articulation, these two processes also exhibit some degree of independence. We discuss the implications of these findings for theories of speech production and future research in bilingual language processing.
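For readers unfamiliar with the measure, VOT is simply the interval between the release of a stop consonant and the onset of vocal fold vibration; negative values indicate prevoicing (voicing beginning before the release), the dimension the abstract highlights. The following is a minimal illustrative sketch of that arithmetic, assuming hypothetical landmark timestamps (e.g., from an automatic detector or hand annotation); the function name and values are illustrative, not the authors' tool.

```python
# Minimal sketch: computing voice onset time (VOT) from two acoustic landmarks.
# The timestamps below are hypothetical examples, not output of the paper's system.

def voice_onset_time(burst_release_s: float, voicing_onset_s: float) -> float:
    """Return VOT in milliseconds.

    Positive values: voicing lags the burst release (e.g., aspirated stops).
    Negative values: voicing precedes the release (prevoicing).
    """
    return (voicing_onset_s - burst_release_s) * 1000.0

# Example: a prevoiced stop where voicing starts 60 ms before the burst release.
print(voice_onset_time(burst_release_s=0.512, voicing_onset_s=0.452))  # -60.0 ms
```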