A self-supervised domain-general learning framework for human ventral stream representation.
Talia Konkle, George A. Alvarez
Published in: Nature Communications (2022)
Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can this representation be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space. Further, these models learn hierarchical features which capture the structure of brain responses across the human ventral visual stream, on par with category-supervised models. These results provide computational support for a domain-general framework guiding the formation of visual representation, where the proximate goal is not explicitly about category information, but is instead to learn unique, compressed descriptions of the visual world.
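The abstract describes the proximate objective in words: embed views of the same image close together, and apart from other recently encountered views. The sketch below makes that objective concrete as an instance-level contrastive loss in PyTorch. It is illustrative only, not the authors' implementation; the prototype-over-views construction, the queue of recent embeddings, the temperature, and all dimensions are assumptions chosen for demonstration.

```python
# A minimal sketch of an instance-level contrastive objective with a memory of
# recently seen embeddings. Hyperparameters and shapes are assumptions, not
# the published model's settings.
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z_views, queue, temperature=0.07):
    """Pull augmented views of the same image toward a shared prototype,
    and push them away from a queue of recently encountered embeddings.

    z_views: (n_views, batch, dim) L2-normalized embeddings of augmented views
    queue:   (queue_len, dim) L2-normalized embeddings of recent images
    """
    # Prototype for each image: the mean of its views, re-normalized.
    prototypes = F.normalize(z_views.mean(dim=0), dim=1)       # (batch, dim)

    n_views, batch, dim = z_views.shape
    z = z_views.reshape(n_views * batch, dim)                  # stack all views

    # Positive similarity: each view against its own image's prototype.
    pos = (z * prototypes.repeat(n_views, 1)).sum(dim=1) / temperature

    # Negative similarities: each view against the recent-history queue.
    neg = z @ queue.t() / temperature                          # (n_views*batch, queue_len)

    # Softmax cross-entropy with the positive in column 0, so the target
    # label for every row is 0: the positive should out-score all negatives.
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Usage with random stand-in embeddings (2 views, batch of 8, 128-d space):
z_views = F.normalize(torch.randn(2, 8, 128), dim=2)
queue = F.normalize(torch.randn(4096, 128), dim=1)
loss = instance_contrastive_loss(z_views, queue)
```

Note that no category labels appear anywhere in this loss: supervision comes entirely from image identity, which is the sense in which category structure can only emerge implicitly in the learned feature space.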
Keyphrases
- machine learning
- deep learning
- neural network
- convolutional neural network