Local online learning in recurrent networks with random feedback.
James M Murray. Published in: eLife (2019)
Recurrent neural networks (RNNs) enable the production and processing of time-dependent signals such as those involved in movement or working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but they are inconsistent with biological features of the brain, such as causality and locality. We derive an approximation to gradient-based learning that comports with these constraints by requiring synaptic weight updates to depend only on local information about pre- and postsynaptic activities, together with a random feedback projection of the RNN output error. In addition to providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to perform a variety of tasks. Finally, to overcome the difficulty of training over very large numbers of timesteps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into longer sequences.
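The abstract does not reproduce the update equations, so as a rough illustration, the NumPy sketch below shows one way a local, online rule with random feedback can be structured: a leaky rate network h(t+1) = h(t) + (tanh(W h + W_in x) - h)/tau with linear readout y = W_out h, an eligibility trace built from pre- and postsynaptic activity, and a fixed random matrix B projecting the output error back to the recurrent units in place of W_out^T. All names (`run_trial`, the trace `p`, the learning rate `lr`) and the specific dynamics are our assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, n_out = 2, 50, 1
tau, lr = 10.0, 1e-3  # membrane time constant (in steps) and learning rate

W_in  = rng.normal(0.0, 1.0 / np.sqrt(n_in),  size=(n_rec, n_in))
W     = rng.normal(0.0, 1.0 / np.sqrt(n_rec), size=(n_rec, n_rec))
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_rec), size=(n_out, n_rec))
B     = rng.normal(0.0, 1.0 / np.sqrt(n_out), size=(n_rec, n_out))  # fixed random feedback

def run_trial(x_seq, y_seq):
    """One online pass over a trial: weights change at every timestep using
    only locally available quantities plus the feedback signal B @ err."""
    global W, W_out
    h = np.zeros(n_rec)
    p = np.zeros((n_rec, n_rec))  # eligibility trace, one entry per recurrent synapse
    for x_t, y_t in zip(x_seq, y_seq):
        u = W @ h + W_in @ x_t  # pre-activation, driven by h(t-1)
        # Low-pass filter of (postsynaptic gain) x (presynaptic rate):
        # local in both space and time, no backpropagation through time.
        p = (1.0 - 1.0 / tau) * p + np.outer(1.0 - np.tanh(u) ** 2, h) / tau
        h = h + (np.tanh(u) - h) / tau  # leaky rate dynamics
        err = y_t - W_out @ h           # readout error
        W     += lr * (B @ err)[:, None] * p  # random feedback replaces W_out.T
        W_out += lr * np.outer(err, h)        # standard delta rule at the readout
```

Training then amounts to repeatedly calling `run_trial` on input/target sequences; because each weight update depends only on the current error and a running trace of past pre/post activity, the rule is causal and requires no storage or replay of the full activity history.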
Keyphrases
- working memory
- neural network
- machine learning
- deep learning