Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns.
Ariel Goldstein, Avigail Grinstein-Dabush, Mariano Schain, Haocheng Wang, Zhuoqiao Hong, Bobbi Aubrey, Samuel A. Nastase, Zaid Zada, Eric Ham, Amir Feder, Harshvardhan Gazula, Eliav Buchnik, Werner Doyle, Sasha Devore, Patricia Dugan, Roi Reichart, Daniel Friedman, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Uri Hasson
Published in: Nature Communications (2024)
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we record neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each participant. Using stringent zero-shot mapping, we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space share common geometric patterns. These common geometric patterns allow us to predict the brain embedding of a given left-out word in the IFG based solely on its geometric relationship to other, non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
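To make the zero-shot mapping idea concrete, the following is a minimal sketch, assuming synthetic data in place of the actual IFG recordings and contextual embeddings, and ridge regression as the linear map; the variable names, dimensions, and model choices are illustrative assumptions, not the paper's pipeline. A linear map is fit from contextual embeddings to brain embeddings on all words except one, and the left-out word's predicted brain embedding is scored by how highly its true brain embedding ranks among all words by cosine similarity.

```python
# Hedged sketch of zero-shot mapping between contextual and brain embeddings.
# Assumptions (not from the paper): synthetic arrays stand in for DLM
# contextual embeddings (X) and IFG brain embeddings (Y); ridge regression
# serves as the linear map; dimensions and noise level are arbitrary.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_words, dim_ctx, dim_brain = 200, 50, 64

# Hypothetical per-word embeddings.
X = rng.normal(size=(n_words, dim_ctx))                        # contextual embeddings
W_true = rng.normal(size=(dim_ctx, dim_brain))
Y = X @ W_true + 0.5 * rng.normal(size=(n_words, dim_brain))   # "brain" embeddings

ranks = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Fit the linear map on every word except the left-out word (zero-shot).
    model = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    y_pred = model.predict(X[test_idx])                        # predicted brain embedding

    # Rank the left-out word's true brain embedding among all words by
    # cosine similarity to the prediction; rank 1 = perfect zero-shot match.
    sims = (Y @ y_pred.T).ravel() / (
        np.linalg.norm(Y, axis=1) * np.linalg.norm(y_pred) + 1e-12
    )
    rank = 1 + np.sum(sims > sims[test_idx[0]])
    ranks.append(rank)

print(f"mean rank of left-out word: {np.mean(ranks):.1f} / {n_words}")
```

A mean rank far better than chance (n_words / 2) would indicate that the geometric relationships among the training words generalize to the left-out word, which is the logic of the zero-shot test described in the abstract.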