
Natural language instructions induce compositional generalization in networks of neurons.

Reidar Riveland, Alexandre Pouget
Published in: Nature Neuroscience (2024)
A fundamental human cognitive feat is to interpret linguistic instructions in order to perform novel tasks without explicit task experience. Yet, the neural computations that might be used to accomplish this remain poorly understood. We use advances in natural language processing to create a neural model of generalization based on linguistic instructions. Models are trained on a set of common psychophysical tasks, and receive instructions embedded by a pretrained language model. Our best models can perform a previously unseen task with an average performance of 83% correct based solely on linguistic instructions (that is, zero-shot learning). We found that language scaffolds sensorimotor representations such that activity for interrelated tasks shares a common geometry with the semantic representations of instructions, allowing language to cue the proper composition of practiced skills in unseen settings. We show how this model generates a linguistic description of a novel task it has identified using only motor feedback, which can subsequently guide a partner model to perform the task. Our models offer several experimentally testable predictions outlining how linguistic information must be represented to facilitate flexible and general cognition in the human brain.
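
The abstract describes the architecture only at a high level: a sensorimotor network trained on a battery of psychophysical tasks, conditioned at each timestep on a fixed embedding of the task instruction produced by a pretrained language model, and then evaluated zero-shot on held-out tasks. The sketch below is one minimal way to realize that setup in PyTorch; it is not the authors' code, and every name and dimension here (InstructedRNN, sensory_dim, instruct_dim, and so on) is an illustrative assumption.

```python
# Minimal sketch (not the published model) of an instruction-conditioned
# sensorimotor network: a GRU receives sensory input concatenated with a
# projected instruction embedding at every timestep and emits motor activity.
import torch
import torch.nn as nn

class InstructedRNN(nn.Module):
    def __init__(self, sensory_dim=65, instruct_dim=64,
                 hidden_dim=256, motor_dim=33):
        super().__init__()
        # The instruction embedding would come from a frozen pretrained
        # language model; here it is just assumed as a fixed-size vector.
        self.instruct_proj = nn.Linear(instruct_dim, instruct_dim)
        self.rnn = nn.GRU(sensory_dim + instruct_dim, hidden_dim,
                          batch_first=True)
        self.motor_out = nn.Linear(hidden_dim, motor_dim)

    def forward(self, sensory, instruct_embedding):
        # sensory: (batch, time, sensory_dim)
        # instruct_embedding: (batch, instruct_dim), one per trial
        T = sensory.shape[1]
        instruct = self.instruct_proj(instruct_embedding)
        instruct = instruct.unsqueeze(1).expand(-1, T, -1)  # tile over time
        h, _ = self.rnn(torch.cat([sensory, instruct], dim=-1))
        return torch.sigmoid(self.motor_out(h))             # motor activity

if __name__ == "__main__":
    net = InstructedRNN()
    sensory = torch.randn(8, 120, 65)   # 8 trials, 120 timesteps
    instruct = torch.randn(8, 64)       # instruction embeddings (placeholder)
    motor = net(sensory, instruct)
    print(motor.shape)                  # torch.Size([8, 120, 33])
```

Zero-shot evaluation in this framing amounts to embedding the instruction for a never-trained task and running the trained network with no further weight updates, so any generalization must come from how the instruction embedding steers the recurrent dynamics.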