
Computational models can distinguish the contribution from different mechanisms to familiarity recognition.

John Read, Emma Delhaye, Jacques Sougné
Published in: Hippocampus (2023)
Familiarity is the strange feeling of knowing that something has already been encountered in our past. Over the past decades, several attempts have been made to model familiarity using artificial neural networks. Recently, two learning algorithms successfully reproduced the functioning of the perirhinal cortex, a key structure involved in familiarity: Hebbian and anti-Hebbian learning. However, the performance of these two learning rules differs markedly, raising the question of their complementarity. In this work, we designed two distinct computational models that combine deep learning with either a Hebbian or an anti-Hebbian learning rule to reproduce familiarity on natural images: the Hebbian model and the anti-Hebbian model, respectively. We compared the performance of both models across several simulations to highlight the inner workings of each learning rule. We showed that the anti-Hebbian model fits human behavioral data, whereas the Hebbian model fails to fit the data when the training set is large. In addition, we observed that only the Hebbian model is highly sensitive to homogeneity between images. Taken together, we interpret these results in light of the distinction between absolute and relative familiarity. With our framework, we propose a novel way to distinguish the contribution of these familiarity mechanisms to the overall feeling of familiarity. By viewing them as complementary, our two models allow us to make new testable predictions that could help shed light on the familiarity phenomenon.
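To illustrate the contrast between the two learning rules discussed in the abstract, the sketch below applies a simple Hebbian and anti-Hebbian weight update to feature vectors (standing in for the output of a pretrained deep network) and reads out a familiarity score from the response magnitude. This is a minimal toy sketch under our own assumptions, not the authors' implementation; all function names, the learning rate, and the random "feature vectors" are hypothetical.

```python
import numpy as np

def hebbian_update(w, x, lr=0.1):
    """Hebbian rule: strengthen weights when pre- and post-synaptic units co-activate,
    so previously studied inputs come to evoke larger responses."""
    y = w @ x
    return w + lr * y * x

def anti_hebbian_update(w, x, lr=0.1):
    """Anti-Hebbian rule: weaken weights for co-active units,
    so previously studied inputs come to evoke smaller (suppressed) responses."""
    y = w @ x
    return w - lr * y * x

def familiarity_score(w, items):
    """Mean response magnitude of the model to a set of feature vectors."""
    return float(np.mean(np.abs(items @ w)))

rng = np.random.default_rng(0)
dim = 128

def random_features(n):
    """Stand-in for deep-network feature vectors: unit-norm random vectors."""
    x = rng.normal(size=(n, dim))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

studied = random_features(50)   # feature vectors of the "study list" images
novel = random_features(50)     # feature vectors of unseen images

w_hebb = rng.normal(scale=0.1, size=dim)
w_anti = w_hebb.copy()
for _ in range(5):              # a few passes over the study list
    for x in studied:
        w_hebb = hebbian_update(w_hebb, x)
        w_anti = anti_hebbian_update(w_anti, x)

# Old/new contrast: studied items respond more strongly than novel ones under the
# Hebbian rule, and more weakly (suppressed) under the anti-Hebbian rule.
print("Hebbian      old:", familiarity_score(w_hebb, studied),
      "new:", familiarity_score(w_hebb, novel))
print("Anti-Hebbian old:", familiarity_score(w_anti, studied),
      "new:", familiarity_score(w_anti, novel))
```

In this toy setup, an old/new decision can be made from either response direction: amplification signals familiarity under the Hebbian rule, suppression under the anti-Hebbian rule.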
Keyphrases
  • deep learning
  • machine learning
  • artificial intelligence