Free recall scaling laws and short-term memory effects in a latching attractor network.
Vezha Boboeva, Alberto Pezzotta, Claudia Clopath
Published in: Proceedings of the National Academy of Sciences of the United States of America (2021)
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
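The core mechanism described above can be illustrated with a minimal toy simulation: a Hopfield-style network storing random binary patterns via an autoassociative Hebbian rule, supplemented with a heteroassociative term linking each pattern to the next, and per-neuron firing-rate adaptation that destabilizes the currently retrieved memory so the network "latches" from one attractor to another. This is only a sketch under assumed parameter values (network size, adaptation gain `g`, heteroassociative strength `lam`), not the paper's actual model; with ±1 units the population activity is balanced by construction, standing in for an explicit global inhibition term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes and parameters (illustrative assumptions, not the paper's values)
N, P = 1000, 5                 # neurons, stored patterns
tau, g, lam = 5.0, 0.9, 0.5    # adaptation time constant, adaptation gain,
                               # heteroassociative strength

xi = rng.choice([-1.0, 1.0], size=(P, N))   # random binary memories

# Autoassociative Hebbian weights (Hopfield-style), no self-connections
W_auto = xi.T @ xi / N
np.fill_diagonal(W_auto, 0.0)

# Heteroassociative weights linking pattern mu to pattern mu+1 (cyclic)
W_het = lam * np.roll(xi, -1, axis=0).T @ xi / N

W = W_auto + W_het

s = xi[0].copy()               # cue the first memory
theta = np.zeros(N)            # per-neuron adaptation (fatigue) variable

visited = []                   # dominant pattern at each time step
for t in range(120):
    h = W @ s - g * theta      # recurrent field minus adaptation
    s = np.where(h >= 0.0, 1.0, -1.0)
    theta += (s - theta) / tau # adaptation slowly tracks recent activity
    m = xi @ s / N             # overlap with each stored pattern
    visited.append(int(np.argmax(np.abs(m))))

print(visited)
```

As adaptation builds up, the self-sustaining field of the current pattern weakens until the heteroassociative drive toward the next pattern wins, producing a sequence of retrieved memories rather than a single fixed point; with the heteroassociative term removed (`lam = 0`), the same network simply stays in the cued attractor.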