
Faculty, Staff and Student Publications
Publication Date
7-12-2024
Journal
Nature Communications
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) learning, whereby certain units signal reward prediction errors (RPE). The TD algorithm has traditionally been mapped onto the dopaminergic system, as firing properties of dopamine neurons can resemble RPEs. However, certain predictions of TD learning are inconsistent with experimental results, and previous implementations of the algorithm have made unscalable assumptions regarding stimulus-specific fixed temporal bases. We propose an alternative framework to describe dopamine signaling in the brain, FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, dopamine release is similar, but not identical, to RPE, leading to predictions that contrast with those of TD. While FLEX itself is a general theoretical framework, we describe a specific, biophysically plausible implementation, the results of which are consistent with a preponderance of both existing and reanalyzed experimental data.
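For readers unfamiliar with the TD framework the abstract contrasts against, a minimal sketch of the standard TD(0) value update may help; this is textbook TD learning, not code from the paper, and all names (`td0_update`, `values`, `alpha`, `gamma`) are illustrative.

```python
def td0_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """Standard TD(0) update: the reward prediction error (RPE), delta,
    is the discounted bootstrap target minus the current value estimate,
    and it drives the learning step. Illustrative sketch only."""
    rpe = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * rpe
    return rpe

# Hypothetical two-state example: a single rewarded transition.
values = [0.0, 0.0]
rpe = td0_update(values, state=0, next_state=1, reward=1.0)
# With all values initialized to 0, the first RPE equals the reward (1.0)
# and values[0] moves to alpha * rpe = 0.1.
```

In the dopaminergic mapping the abstract refers to, `rpe` is the quantity dopamine neuron firing is traditionally taken to encode; FLEX departs from this by letting dopamine release be similar but not identical to this error signal.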
Keywords
Reward; Dopaminergic Neurons; Dopamine; Animals; Algorithms; Learning; Models, Neurological; Humans; Reinforcement, Psychology; Brain; Neuronal Plasticity; Time Factors
DOI
10.1038/s41467-024-50205-3
PMID
38997276
PMCID
PMC11245539
PubMedCentral® Posted Date
July 2024
PubMedCentral® Full Text Version
Post-print
Published Open-Access
yes
Included in
Bioinformatics Commons, Biomedical Informatics Commons, Medical Sciences Commons, Oncology Commons
Associated Data
PMID: 38997276