
Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions.

Michael J Crosse, John B Butler, Edmund C Lalor
Published in: The Journal of neuroscience : the official journal of the Society for Neuroscience (2016)
Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information not only about what the speaker is saying but also, importantly, about when the speaker is saying it. Studying how the brain uses this timing relationship to combine information from continuous auditory and visual speech has traditionally been methodologically difficult. Here we introduce a new approach for doing this using relatively inexpensive and noninvasive scalp recordings. Specifically, we show that the brain's representation of auditory speech is enhanced when the accompanying visual speech signal shares the same timing. Furthermore, we show that this enhancement is most pronounced at a time scale that corresponds to the mean syllable length.
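Measurements of this kind are commonly made with a linear stimulus-reconstruction (decoding) approach: the speech amplitude envelope is reconstructed from time-lagged, band-limited EEG using regularized regression, and the correlation between the reconstructed and actual envelope indexes how strongly the cortex tracks the speech. The sketch below illustrates that general idea in Python; it is not the authors' exact pipeline, and the simulated data, 2-6 Hz "syllable-rate" band, 0-250 ms lag window, and regularization value are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# --- Simulated stand-in data (assumptions, not the study's recordings) ---
fs = 64                       # sampling rate in Hz after downsampling (assumed)
n_samples = fs * 60           # one minute of signal
n_channels = 32               # EEG channel count (assumed montage size)

rng = np.random.default_rng(0)
envelope = rng.standard_normal(n_samples)            # stand-in speech envelope
eeg = rng.standard_normal((n_samples, n_channels))   # stand-in EEG
eeg[:, 0] += 0.5 * envelope                          # inject an envelope-tracking component


def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase band-pass filter along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)


def lagged_design(x, lags):
    """Stack time-lagged copies of the multichannel signal into a design matrix."""
    n, c = x.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(x, lag, axis=0)
        shifted[:lag] = 0                  # zero out samples wrapped by the roll
        X[:, i * c:(i + 1) * c] = shifted
    return X


def reconstruction_accuracy(eeg, env, fs, band=(2, 6), lam=1e2):
    """Reconstruct the envelope from lagged EEG via ridge regression and
    return the Pearson correlation between reconstruction and target."""
    eeg_f = bandpass(eeg, *band, fs)
    env_f = bandpass(env[:, None], *band, fs)[:, 0]
    lags = range(0, int(0.25 * fs))        # 0-250 ms integration window (assumed)
    X = lagged_design(eeg_f, lags)
    half = X.shape[0] // 2                 # simple split: train on first half, test on second
    X_tr, X_te = X[:half], X[half:]
    y_tr, y_te = env_f[:half], env_f[half:]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return np.corrcoef(X_te @ w, y_te)[0, 1]


print("syllable-rate (2-6 Hz) reconstruction r:",
      reconstruction_accuracy(eeg, envelope, fs))
```

In a real application, the same accuracy metric would be computed per listening condition (e.g., audio-only vs. congruent audiovisual) and per frequency band, so that an audiovisual enhancement at syllable-rate time scales appears as a higher reconstruction correlation in that band.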