Driving and suppressing the human language network using large language models.

Greta Tuckute, Aalok Sathe, Shashank Srikant, Maya Taliaferro, Mingye Wang, Martin Schrimpf, Kendrick N. Kay, Evelina Fedorenko
Published in: bioRxiv: the preprint server for biology (2023)
Transformer language models are today's most accurate models of language processing in the brain. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we develop a GPT-based encoding model to identify new sentences that are predicted to drive or suppress responses in the human language network. We demonstrate that these model-selected 'out-of-distribution' sentences indeed drive and suppress activity of human language areas in new individuals (85.7% increase and 97.5% decrease relative to the diverse naturalistic sentences). A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of accurate models of the brain to noninvasively control neural activity in higher-level cortical areas, like the language network.
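
The approach described in the abstract (fitting an encoding model from language-model sentence embeddings to fMRI responses, then screening novel sentences by predicted response) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: synthetic data stands in for GPT hidden states and fMRI measurements, and ridge regression is an assumed choice of linear encoding model.

```python
# Hedged sketch of an embedding-to-brain encoding model and sentence
# selection. Real inputs would be GPT sentence embeddings and per-sentence
# fMRI responses from the language network; synthetic data is used here
# so the example runs end to end.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_train, n_candidates, n_dims = 1000, 5000, 256

# Stand-ins for language-model sentence embeddings (rows = sentences).
X_train = rng.standard_normal((n_train, n_dims))
X_candidates = rng.standard_normal((n_candidates, n_dims))

# Stand-in for the language network's mean fMRI response per sentence.
true_w = rng.standard_normal(n_dims)
y_train = X_train @ true_w + rng.standard_normal(n_train)

# Encoding model: regularized linear map from embeddings to responses,
# with the ridge penalty chosen by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)

# Predict responses for a large pool of novel candidate sentences and
# keep the extremes: highest predictions as "drive" sentences, lowest
# as "suppress" sentences, to be validated in new participants.
preds = model.predict(X_candidates)
drive_idx = np.argsort(preds)[-10:]
suppress_idx = np.argsort(preds)[:10]
print("top predicted (drive) responses:", preds[drive_idx])
print("bottom predicted (suppress) responses:", preds[suppress_idx])
```

In this framing, the selected sentences are out-of-distribution by construction: they are chosen from the tails of the model's predicted-response distribution rather than sampled like the training sentences.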