Adaptive tuning of human learning and choice variability to unexpected uncertainty.
Junseok K. Lee, Marion Rouault, Valentin Wyart. Published in: Science Advances (2023)
Human value-based decisions are notably variable under uncertainty. This variability is known to arise from two distinct sources: variable choices aimed at exploring available options, and imprecise learning of option values due to limited cognitive resources. However, whether these two sources of decision variability are tuned to their specific costs and benefits remains unclear. To address this question, we compared the effects of expected and unexpected uncertainty on decision-making in the same reinforcement learning task. Across two large behavioral datasets, we found that, in response to unexpected uncertainty, humans choose more variably between options but simultaneously learn the values of these options less imprecisely. Using simulations of learning agents, we demonstrate that these opposite adjustments reflect adaptive tuning of exploration and learning precision to the structure of uncertainty. Together, these findings indicate that humans regulate not only how much they explore uncertain options but also how precisely they learn the values of these options.
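The two sources of decision variability described above can be illustrated with a minimal simulation sketch. This is not the authors' model: the agent, its parameters (`alpha` for the learning rate, `beta` for the softmax inverse temperature governing choice variability, `zeta` for the scale of learning noise), and the two-armed bandit setup are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    """Softmax choice policy; lower beta = more variable (exploratory) choices."""
    z = beta * (q - q.max())
    p = np.exp(z)
    return p / p.sum()

def simulate(n_trials=200, alpha=0.3, beta=5.0, zeta=0.05, p_reward=(0.7, 0.3)):
    """Noisy reinforcement learner on a two-armed bandit (illustrative).

    alpha : learning rate
    beta  : inverse temperature (choice variability)
    zeta  : scale of learning noise (imprecision of value updates)
    """
    q = np.full(2, 0.5)                      # initial option values
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        c = rng.choice(2, p=softmax(q, beta))      # variable choice
        r = float(rng.random() < p_reward[c])      # binary reward
        delta = r - q[c]                           # prediction error
        # Imprecise learning: Gaussian noise corrupts each value update
        q[c] += alpha * delta + zeta * abs(delta) * rng.standard_normal()
        choices[t] = c
    return choices, q

choices, q = simulate()
```

Raising `zeta` (noisier learning) and lowering `beta` (noisier choices) produce behaviorally similar variability, which is why the paper's comparison of expected versus unexpected uncertainty is needed to tease the two apart.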