
The successor representation in human reinforcement learning.

Ida Momennejad, E. M. Russek, J. H. Cheong, Matthew M. Botvinick, N. D. Daw, Samuel J. Gershman
Published in: Nature Human Behaviour (2017)
Theories of reward learning in neuroscience have focused on two families of algorithms thought to capture deliberative versus habitual choice. 'Model-based' algorithms compute the value of candidate actions from scratch, whereas 'model-free' algorithms make choice more efficient but less flexible by storing pre-computed action values. We examine an intermediate algorithmic family, the successor representation, which balances flexibility and efficiency by storing partially computed action values: predictions about future events. These pre-computation strategies differ in how they update their choices following changes in a task. The successor representation's reliance on stored predictions about future states predicts a unique signature of insensitivity to changes in the task's sequence of events, but flexible adjustment following changes to rewards. We provide evidence for such differential sensitivity in two behavioural studies with humans. These results suggest that the successor representation is a computational substrate for semi-flexible choice in humans, introducing a subtler, more cognitive notion of habit.
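To make the abstract's distinction concrete, here is a minimal sketch of successor-representation (SR) learning in Python. This is not the authors' experimental code; the toy chain task, the learning rates, and the function name `td_update_sr` are illustrative assumptions. It shows the key property the abstract describes: the SR caches predictions about future states (a matrix M of discounted expected state occupancies), and values are the dot product of M with a reward vector R, so a change to rewards updates values immediately while a change to transitions leaves a stale M that must be relearned from experience.

```python
import numpy as np

def td_update_sr(M, s, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the successor matrix M.

    M[s, :] estimates discounted expected future occupancy of every
    state starting from s: M(s, s') = E[ sum_t gamma^t * 1(s_t = s') ].
    (Illustrative sketch, not the paper's code.)
    """
    one_hot = np.zeros(M.shape[1])
    one_hot[s] = 1.0
    # SR-TD error: observed occupancy plus bootstrapped future
    # occupancy, minus the current prediction for the visited state.
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Assumed toy task: a chain 0 -> 1 -> 2, with reward only at state 2.
n_states = 3
M = np.eye(n_states)           # start from immediate self-occupancy
R = np.array([0.0, 0.0, 1.0])  # one reward estimate per state

for _ in range(500):           # repeated experience of the same trajectory
    M = td_update_sr(M, 0, 1)
    M = td_update_sr(M, 1, 2)

V = M @ R  # values = stored predictions dotted with rewards

# Reward revaluation: only R changes, so values adjust in one step
# (the flexibility the abstract describes) ...
R_new = np.array([0.0, 0.0, 2.0])
V_after_reward_change = M @ R_new
# ... whereas a transition change invalidates the cached M, which must
# be relearned from new experience (the predicted insensitivity to
# changes in the task's sequence of events).
```

In this sketch, revaluing the reward shifts V(0) from roughly gamma^2 to 2·gamma^2 without any new learning, whereas rerouting the chain would leave M, and hence V, temporarily wrong; that asymmetry is the behavioural signature the two studies test for.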
Keyphrases
  • machine learning
  • decision making
  • prefrontal cortex