Letter perception emerges from unsupervised deep learning and recycling of natural image features.

Alberto Testolin, Ivilin Peev Stoianov, Marco Zorzi
Published in: Nature Human Behaviour (2017)
The use of written symbols is a major achievement of human cultural evolution. However, how abstract letter representations might be learned from vision is still an unsolved problem [1,2]. Here, we present a large-scale computational model of letter recognition based on deep neural networks [3,4], which develops a hierarchy of increasingly complex internal representations in a completely unsupervised way by fitting a probabilistic, generative model to the visual input [5,6]. In line with the hypothesis that learning written symbols partially recycles pre-existing neuronal circuits for object recognition [7], earlier processing levels in the model exploit domain-general visual features learned from natural images, while domain-specific features emerge in upstream neurons following exposure to printed letters. We show that these high-level representations can be easily mapped to letter identities even for noise-degraded images, producing accurate simulations of a broad range of empirical findings on letter perception in human observers. Our model shows that by reusing natural visual primitives, learning written symbols requires only limited, domain-specific tuning, supporting the hypothesis that their shape has been culturally selected to match the statistical structure of natural environments [8].
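The abstract describes the training scheme but not the implementation. For a concrete picture, the following is a minimal sketch of a two-stage unsupervised pipeline of the kind described: a stack of restricted Boltzmann machines whose first layer is trained on natural images (domain-general features) and whose higher layer is then tuned on printed letters (domain-specific features), with a simple supervised read-out mapping the top-level representations to letter identities. The layer sizes, the CD-1 learning rule, the logistic-regression read-out, and all data arrays are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch (assumptions noted above); all datasets are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: infer hidden activations from the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one Gibbs step (sample hiddens, reconstruct visibles).
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence parameter update.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def train_rbm(rbm, data, epochs=10, batch=100):
    for _ in range(epochs):
        for i in range(0, len(data), batch):
            rbm.cd1_step(data[i:i + batch])

# Hypothetical binarized 28x28 inputs in [0, 1].
natural_patches = (rng.random((5000, 784)) < 0.5).astype(float)  # natural-image stand-in
letter_images = (rng.random((2600, 784)) < 0.5).astype(float)    # printed-letter stand-in
letter_labels = rng.integers(0, 26, 2600)                        # 26 letter identities

# Stage 1 (domain-general): first layer learns features of natural images.
layer1 = RBM(784, 500)
train_rbm(layer1, natural_patches)

# Stage 2 (domain-specific): a higher layer is tuned on printed letters,
# reusing the frozen natural-image features below it ("neuronal recycling").
letters_h1 = layer1.hidden_probs(letter_images)
layer2 = RBM(500, 300)
train_rbm(layer2, letters_h1)

# Read-out: high-level representations are mapped to letter identities
# with a simple supervised classifier.
top_codes = layer2.hidden_probs(letters_h1)
readout = LogisticRegression(max_iter=1000).fit(top_codes, letter_labels)
print("read-out accuracy:", readout.score(top_codes, letter_labels))
```

The two-stage split mirrors the recycling hypothesis in the abstract: only the upper layer and the read-out ever see letters, so the domain-specific tuning stays limited while the lower layer's natural visual primitives are reused as-is.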
Keyphrases
  • deep learning
  • machine learning
  • convolutional neural network
  • neural network