Asymmetric and adaptive reward coding via normalized reinforcement learning.

Kenway Louie
Published in: PLoS Computational Biology (2022)
Learning is widely modeled in psychology, neuroscience, and computer science by prediction error-guided reinforcement learning (RL) algorithms. While standard RL assumes linear reward functions, reward-related neural activity is a saturating, nonlinear function of reward; however, the computational and behavioral implications of nonlinear RL are unknown. Here, we show that nonlinear RL incorporating the canonical divisive normalization computation introduces an intrinsic and tunable asymmetry in prediction error coding. At the behavioral level, this asymmetry explains empirical variability in risk preferences typically attributed to asymmetric learning rates. At the neural level, diversity in asymmetries provides a computational mechanism for recently proposed theories of distributional RL, allowing the brain to learn the full probability distribution of future rewards. This behavioral and computational flexibility argues for the incorporation of biologically valid value functions in computational models of learning and decision-making.
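
The abstract's two quantitative claims can be illustrated with a short simulation. The sketch below is not the paper's model; it is a minimal Python illustration assuming a generic divisive-normalization transform u(r) = r / (sigma + r), an arbitrary two-outcome lottery, and illustrative learning rates. Part 1 shows that passing rewards through a saturating nonlinearity before a standard delta-rule update yields risk-averse valuation without separate positive/negative learning rates; part 2 shows the distributional-RL mechanism the abstract cites, where units with diverse update asymmetries converge to different expectiles of the reward distribution.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): semisaturation constant,
# base learning rate, and a 50/50 two-outcome lottery with mean reward 5.
SIGMA = 1.0
ALPHA = 0.05
N_TRIALS = 20_000
rewards = rng.choice([1.0, 9.0], size=N_TRIALS)

def normalize(r, sigma=SIGMA):
    # Divisive normalization: a saturating, nonlinear reward transform.
    return r / (sigma + r)

# Part 1: identical delta-rule updates on raw vs. normalized rewards.
v_lin, v_norm = 0.0, 0.0
for r in rewards:
    v_lin += ALPHA * (r - v_lin)                # delta = r - V
    v_norm += ALPHA * (normalize(r) - v_norm)   # delta = u(r) - V

# Map the normalized value back to objective reward units: the certainty
# equivalent sits below the lottery's mean, i.e. risk aversion emerges
# from the concave transform alone, with a single symmetric learning rate.
certainty_equivalent = SIGMA * v_norm / (1.0 - v_norm)
print(f"mean reward:          {rewards.mean():.2f}")
print(f"linear RL value:      {v_lin:.2f}")
print(f"certainty equivalent: {certainty_equivalent:.2f}")

# Part 2: a population of units with diverse update asymmetries. A unit
# that weights positive errors by tau and negative errors by (1 - tau)
# converges to the tau-th expectile of the reward distribution, so the
# population jointly encodes the whole distribution (distributional RL).
taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
v = np.full_like(taus, rewards.mean())
for r in rewards:
    delta = r - v
    rate = np.where(delta > 0, ALPHA * taus, ALPHA * (1.0 - taus))
    v += rate * delta
print("expectile estimates per unit:", np.round(v, 2))

With the lottery above, the tau = 0.5 unit recovers the mean (5.0), while the tau = 0.1 and tau = 0.9 units settle near 1.8 and 8.2, the analytic expectiles of this two-outcome distribution; the spread across units is what lets a population read out the full distribution of future rewards.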
Keyphrases
  • decision making
  • machine learning
  • prefrontal cortex