Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions.

Fritz Günther, Luca Rinaldi, Marco Marelli
Published in: Perspectives on Psychological Science: A Journal of the Association for Psychological Science (2019)
Models that represent meaning as high-dimensional numerical vectors, such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec, have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past two decades. However, despite their considerable advancements and spread in the cognitive sciences, one can observe problems associated with the adequate presentation and understanding of some of their features. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise as a result of incomplete descriptions, outdated arguments, and unclear distinctions between theory and implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
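To illustrate the core idea behind such models, the sketch below builds a toy count-based vector space: each word is represented by its co-occurrence counts within a small context window, and semantic relatedness is estimated as the cosine between two such vectors. This is not the procedure used by any specific model discussed in the article; the corpus, window size, and helper names are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a toy count-based vector-space model.
# Word meanings are represented as co-occurrence count vectors, and semantic
# relatedness is estimated as the cosine between two such vectors.

from collections import Counter
from math import sqrt

corpus = "the dog chased the cat the cat chased the mouse".split()
window = 2  # symmetric context window (assumed value)

vocab = sorted(set(corpus))
vectors = {w: Counter() for w in vocab}

# Count co-occurrences of each target word with the words in its context window.
for i, target in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[target][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Words that appear in similar contexts receive more similar vectors.
print(cosine(vectors["dog"], vectors["cat"]))
print(cosine(vectors["dog"], vectors["mouse"]))
```

Models such as LSA, GloVe, or word2vec go well beyond raw counts (e.g., via dimensionality reduction or prediction-based learning), but the underlying representational assumption, that words occurring in similar contexts receive similar vectors, is the same.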