Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates

dc.contributor.author: Kerr, Robert
dc.contributor.author: Grayden, David
dc.contributor.author: Thomas, Doreen
dc.contributor.author: Gilson, Matthieu
dc.contributor.author: Burkitt, Anthony
dc.date.accessioned: 2014-02-21T05:49:57Z
dc.date.available: 2014-02-21T05:49:57Z
dc.date.issued: 2014-01-27
dc.description.abstract: A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well-studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity for insight into this. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., no learning occurs without, or independent of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the synaptic weight dependence that is assumed, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments.
dc.description.sponsorship: Funding is acknowledged from the Australian Research Council (ARC Discovery Project DP1096699). The Bionics Institute acknowledges the support it receives from the Victorian Government through its Operational Infrastructure Support Program. This work was supported by the Australian Federal and Victorian State Governments and the Australian Research Council through the ICT Centre of Excellence program, National ICT Australia (NICTA). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
dc.identifier.citation: Kerr, R. R., Grayden, D. B., Thomas, D. A., Gilson, M., & Burkitt, A. N. (2014). Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates. PLoS ONE, 9(1), e87123. doi: 10.1371/journal.pone.0087123
dc.identifier.other: doi:10.1371/journal.pone.0087123
dc.identifier.uri: http://repository.bionicsinstitute.org:8080/handle/123456789/67
dc.language.iso: en_US
dc.publisher: PLoS ONE
dc.title: Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates
dc.type: Article
Files

Original bundle (1 of 1):
- Name: 2014-CoexistenceRewardUnsupervisedLearning.pdf
- Size: 1.12 MB
- Format: Adobe Portable Document Format

License bundle (1 of 1):
- Name: license.txt
- Size: 1.71 KB
- Description: Item-specific license agreed upon to submission

Collections