Speech and music recruit frequency-specific distributed and overlapping cortical networks.
Noémie Te Rietmolen, Manuel R Mercier, Agnès Trébuchon, Benjamin Morillon, Daniele Schön
Published in: eLife (2024)
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined this approach with a statistical framework that makes a clear operational distinction between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
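The operational distinction between shared, preferred, and domain-selective responses lends itself to a simple decision rule. Below is a minimal, hypothetical Python sketch of one way such a classification could be implemented at the single-channel level, assuming z-scored activation estimates against a silent baseline for each domain; the thresholds, variable names, and simulated data are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: labelling channels as shared, preferred, or selective,
# given per-channel z-scored responses to speech and music vs. baseline.
# All names and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 100 channels, z-scored activation for each condition.
z_speech = rng.normal(loc=1.0, scale=1.5, size=100)
z_music = rng.normal(loc=1.0, scale=1.5, size=100)

Z_SIG = 1.96   # threshold for a significant response vs. baseline
Z_DIFF = 1.96  # threshold for a significant speech-music difference

def classify(zs, zm, z_sig=Z_SIG, z_diff=Z_DIFF):
    """Label one channel following a shared / preferred / selective logic."""
    sig_s, sig_m = zs > z_sig, zm > z_sig
    differs = abs(zs - zm) > z_diff
    if sig_s and sig_m:
        if not differs:
            return "shared"                      # responds to both, no reliable difference
        return "preferred-speech" if zs > zm else "preferred-music"
    if sig_s and not sig_m:
        return "selective-speech"                # responds to speech only
    if sig_m and not sig_s:
        return "selective-music"                 # responds to music only
    return "unresponsive"

labels = [classify(zs, zm) for zs, zm in zip(z_speech, z_music)]
for lab in sorted(set(labels)):
    print(f"{lab:18s} {labels.count(lab):3d} channels")
```

In this scheme, selectivity requires a significant response to one domain together with the absence of a response to the other, a stricter criterion than a mere between-domain difference, which would only qualify a channel as showing a preference.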