Imagined speech event detection from electrocorticography and its transfer between speech modes and subjects.

Aurélie de Borman, Benjamin Wittevrongel, Ine Dauwe, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Paul Boon, Marc M. Van Hulle
Published in: Communications Biology (2024)
Speech brain-computer interfaces aim to support communication-impaired patients by translating neural signals into speech. While impressive progress has been achieved in decoding performed, perceived, and attempted speech, imagined speech remains elusive, mainly due to the absence of behavioral output. Nevertheless, imagined speech is advantageous since it does not depend on any articulator movements that might become impaired or even lost throughout the stages of a neurodegenerative disease. In this study, we analyzed electrocorticography data recorded from 16 participants in response to 3 speech modes: performed, perceived (listening), and imagined speech. We used a linear model to detect speech events and examined the contributions of each frequency band, from delta to high gamma, given the speech mode and electrode location. For imagined speech detection, we observed a strong contribution of gamma bands in the motor cortex, whereas lower frequencies were more prominent in the temporal lobe, particularly in the left hemisphere. Based on the similarities in frequency patterns, we were able to transfer models between speech modes and participants with similar electrode locations.
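The abstract describes detecting speech events from per-band spectral power with a linear model. A minimal sketch of that idea is shown below, using NumPy only: log band powers from delta through high gamma are computed per window and fed to a ridge-regularized linear detector. The sampling rate, band edges, regularization strength, and the synthetic "speech adds high-gamma power" data are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

FS = 1000  # assumed ECoG sampling rate (Hz), for illustration only
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 70), "high_gamma": (70, 170)}

def band_powers(window):
    """Log power in each frequency band of a 1-D signal window (via FFT)."""
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return np.array([np.log(psd[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                     for lo, hi in BANDS.values()])

def fit_linear_detector(X, y, lam=1e-3):
    """Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return w

# Toy demo: "speech" windows carry extra high-gamma power.
rng = np.random.default_rng(0)
n_windows, win_len = 200, 512
t = np.arange(win_len) / FS
X, y = [], []
for i in range(n_windows):
    sig = rng.standard_normal(win_len)
    label = i % 2
    if label:
        sig += 2 * np.sin(2 * np.pi * 100 * t)  # 100 Hz tone -> high-gamma band
    X.append(band_powers(sig))
    y.append(label)
X, y = np.array(X), np.array(y)

w = fit_linear_detector(X, y)
scores = np.hstack([X, np.ones((n_windows, 1))]) @ w
acc = ((scores > 0.5) == y).mean()
print(f"detection accuracy on toy data: {acc:.2f}")
```

A real analysis would instead use cross-validated per-electrode features and the paper's actual windowing, but the sketch conveys why band-wise linear weights are interpretable: each weight directly reflects one band's contribution to the detection decision.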