Bionics Institute Research Online

Title: Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates
Authors: Kerr, Robert; Grayden, David; Thomas, Doreen; Gilson, Matthieu; Burkitt, Anthony
Issue Date: 27-Jan-2014
Publisher: Public Library of Science (PLOS)
Citation: Kerr, R. R., Grayden, D. B., Thomas, D. A., Gilson, M., & Burkitt, A. N. (2014). Coexistence of reward and unsupervised learning during the operant conditioning of neural firing rates. PLoS ONE, 9(1), e87123. doi: 10.1371/journal.pone.0087123
Abstract: A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well-studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity to gain such insight. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., that no learning occurs without, or independently of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the assumed synaptic weight dependence, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments.
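The key mechanism the abstract describes is a reward signal that modulates both sides of the STDP window, strengthening LTP more than LTD. The following is a minimal illustrative sketch of such a rule; it is not the authors' published model, and all function names, parameter names, and numerical values (learning rates, time constant, modulation gains) are hypothetical choices for illustration only.

```python
import math

def rstdp_update(dt_ms, reward, a_plus=0.01, a_minus=0.012,
                 tau_ms=20.0, k_ltp=2.0, k_ltd=1.2):
    """Weight change for a single pre/post spike pairing.

    dt_ms  : t_post - t_pre in milliseconds
    reward : reward signal in [0, 1]; 0 recovers plain (unsupervised) STDP
    k_ltp > k_ltd encodes the condition that reward strengthens
    LTP more than LTD (all values here are illustrative).
    """
    if dt_ms >= 0:
        # Pre-before-post pairing -> potentiation (LTP),
        # scaled up by the reward with gain k_ltp.
        return (1.0 + k_ltp * reward) * a_plus * math.exp(-dt_ms / tau_ms)
    else:
        # Post-before-pre pairing -> depression (LTD),
        # scaled up less strongly, with gain k_ltd < k_ltp.
        return -(1.0 + k_ltd * reward) * a_minus * math.exp(dt_ms / tau_ms)
```

With `reward = 0` the rule behaves as ordinary unsupervised STDP, so learning continues between rewards; when a reward arrives, the asymmetric gains tilt the net drift of the weights toward potentiation, which is the ingredient the abstract identifies as necessary for firing rate reinforcement.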
URI: http://repository.bionicsinstitute.org:8080/handle/123456789/67
Appears in Collections:Other research publications

Files in This Item:

File: 2014-CoexistenceRewardUnsupervisedLearning.pdf (1.14 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.


DSpace Software Copyright © 2002-2010 Duraspace