
Knowledge of animal appearance among sighted and blind adults.

Judy Sein Kim, Giulia V. Elli, Marina Bedny
Published in: Proceedings of the National Academy of Sciences of the United States of America (2019)
How does first-person sensory experience contribute to knowledge? Contrary to the suppositions of early empiricist philosophers, people who are born blind know about phenomena that cannot be perceived directly, such as color and light. Exactly what is learned and how remains an open question. We compared knowledge of animal appearance across congenitally blind (n = 20) and sighted individuals (two groups, n = 20 and n = 35) using a battery of tasks, including ordering (size and height), sorting (shape, skin texture, and color), odd-one-out (shape), and feature choice (texture). On all tested dimensions apart from color, sighted and blind individuals showed substantial albeit imperfect agreement, suggesting that linguistic communication and visual perception convey partially redundant appearance information. To test the hypothesis that blind individuals learn about appearance primarily by remembering sighted people's descriptions of what they see (e.g., "elephants are gray"), we measured verbalizability of animal shape, texture, and color in the sighted. Contrary to the learn-from-description hypothesis, blind and sighted groups disagreed most about the appearance dimension that was easiest for sighted people to verbalize: color. Analysis of disagreement patterns across all tasks suggests that blind individuals infer physical features from non-appearance properties of animals such as folk taxonomy and habitat (e.g., bats are textured like mammals but shaped like birds). These findings suggest that in the absence of sensory access, structured appearance knowledge is acquired through inference from ontological kind.