
Reward and fictive prediction error signals in ventral striatum: asymmetry between factual and counterfactual processing.

Aniol Santo-Angles, P. Fuentes-Claramonte, I. Argila-Plaza, M. Guardiola-Ripoll, C. Almodóvar-Payá, J. Munuera, P. J. McKenna, E. Pomarol-Clotet, J. Radua
Published in: Brain Structure & Function (2021)
Reward prediction error, the difference between the expected and the obtained reward, is known to act as a neural signal for reinforcement learning. In the current study, we propose a model-fitting approach that combines behavioral and neural data to fit computational models of reinforcement learning. Briefly, we penalized subject-specific fitted parameters that deviated too far from the group median, except when that deviation led to an improvement in the model's fit to neural responses. Using a probabilistic monetary learning task and fMRI, we compared our approach with standard model-fitting methods. Q-learning outperformed actor-critic at both the behavioral and the neural level, although including neuroimaging data in model fitting improved the fit of actor-critic models. We observed both action-value and state-value prediction error signals in the striatum, whereas standard model-fitting approaches failed to capture state-value signals. Finally, the left ventral striatum correlated with reward prediction error and the right ventral striatum with fictive prediction error, suggesting a functional hemispheric asymmetry in prediction-error-driven learning.
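
To make the quantities in the abstract concrete, the following is a minimal Python sketch, not the authors' code, of the trial-by-trial reward prediction error under Q-learning, delta_t = r_t - Q(a_t), together with one plausible way to express the penalized fitting idea; the names penalized_loss, neural_fit_gain, and lam are hypothetical illustrations rather than terms from the paper.

```python
import numpy as np

def q_learning_prediction_errors(rewards, choices, n_actions=2, alpha=0.1):
    """Trial-by-trial reward prediction errors under a simple Q-learning model:
    delta_t = r_t - Q(a_t), with the update Q(a_t) <- Q(a_t) + alpha * delta_t.

    rewards : obtained reward on each trial
    choices : index of the chosen action on each trial
    alpha   : learning rate (a free parameter fitted per subject)
    """
    q = np.zeros(n_actions)
    deltas = np.empty(len(rewards))
    for t, (a, r) in enumerate(zip(choices, rewards)):
        deltas[t] = r - q[a]        # reward prediction error
        q[a] += alpha * deltas[t]   # value update
    return deltas

def penalized_loss(neg_log_lik, params, group_median, neural_fit_gain, lam=1.0):
    """Hypothetical penalized objective (an assumption, not the paper's exact
    formula): behavioral negative log-likelihood plus a penalty on parameters
    that deviate from the group median, relaxed by the improvement in the fit
    to neural responses that the deviation buys."""
    deviation = np.sum((np.asarray(params) - np.asarray(group_median)) ** 2)
    return neg_log_lik + lam * max(deviation - neural_fit_gain, 0.0)
```

In this sketch, the penalty on deviation from the group median is offset by the gain in neural fit, which is one of several ways the exception described in the abstract could be encoded in an objective function.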