Dopamine transients do not act as model-free prediction errors during associative learning.

Melissa J Sharpe, Hannah M Batchelor, Lauren E Mueller, Chun Yun Chang, Etienne J P Maes, Yael Niv, Geoffrey Schoenbaum
Published in: Nature Communications (2020)
Dopamine neurons are proposed to signal the reward prediction error in model-free reinforcement learning algorithms. This term represents the unpredicted or 'excess' value of the rewarding event, value that is then added to the intrinsic value of any antecedent cues, contexts or events. To support this proposal, proponents cite evidence that artificially induced dopamine transients cause lasting changes in behavior. Yet these studies do not generally assess learning under conditions where an endogenous prediction error would occur. Here, to address this, we conducted three experiments in which we optogenetically activated dopamine neurons while rats were learning associative relationships, both with and without reward. In each experiment, the antecedent cues failed to acquire value and instead entered into associations with the later events, whether valueless cues or valued rewards. These results show that in learning situations appropriate for the appearance of a prediction error, dopamine transients support associative, rather than model-free, learning.
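
For context, the model-free account being tested treats the dopamine transient as a temporal-difference prediction error. In the standard textbook formulation (shown here only for illustration; the notation is not taken from the paper itself):

\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t

where r_t is the reward delivered at time t, V(s) is the cached value of a state or cue, \gamma is a discount factor, and \alpha is a learning rate. On this account, a dopamine transient acting as a positive \delta_t should add value to whatever cues preceded it; the experiments reported here instead found that the paired cues entered into associations with later events without acquiring value themselves.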