Speech and music recruit frequency-specific distributed and overlapping cortical networks.
Noémie Te Rietmolen, Manuel R Mercier, Agnès Trébuchon, Benjamin Morillon, Daniele Schön
Published in: eLife (2024)
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined these recordings with a statistical approach that makes a clear operational distinction between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
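To illustrate the operational distinction between shared, preferred, and domain-selective responses mentioned above, the sketch below classifies a single recording site from its trial-wise response magnitudes to speech and music. This is a minimal illustration only: the specific tests, thresholds, and function names are assumptions and are not taken from the paper's methods.

```python
import numpy as np
from scipy import stats

def classify_response(speech_trials, music_trials, alpha=0.05):
    """Illustrative (hypothetical) classification of one recording site as
    shared, preferred, or domain-selective, given per-trial response
    magnitudes to speech and music. Tests and alpha are assumptions."""
    # Is the response reliably above baseline (zero) in each domain?
    sig_speech = stats.ttest_1samp(speech_trials, 0).pvalue < alpha
    sig_music = stats.ttest_1samp(music_trials, 0).pvalue < alpha
    # Do the two domains differ reliably from each other?
    differs = stats.ttest_ind(speech_trials, music_trials).pvalue < alpha

    if sig_speech and sig_music:
        # Responsive to both domains: "preferred" if one is reliably
        # stronger, otherwise "shared"
        return "preferred" if differs else "shared"
    if sig_speech and not sig_music and differs:
        return "speech-selective"
    if sig_music and not sig_speech and differs:
        return "music-selective"
    return "no reliable response"

# Purely synthetic example data, not from the study
rng = np.random.default_rng(0)
speech = rng.normal(1.0, 1.0, 50)   # responds to speech
music = rng.normal(0.9, 1.0, 50)    # responds similarly to music
print(classify_response(speech, music))  # expected: "shared"
```

Under this kind of scheme, anatomical selectivity would require sites classified as selective to cluster in specific regions; the abstract reports instead that selectivity appears only in distributed, frequency-specific coherent oscillations.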