A longitudinal analysis of the social information in infants' naturalistic visual experience using automated detections.
Bria L. Long, George Kachergis, Ketan Agrawal, Michael C. Frank
Published in: Developmental Psychology (2022)
The faces and hands of caregivers and other social partners offer a rich source of social and causal information that is likely critical for infants' cognitive and linguistic development. Previous work using manual annotation strategies and cross-sectional data has found systematic changes in the proportion of faces and hands in the egocentric perspective of young infants. Here, we validated the use of a modern convolutional neural network (OpenPose) for the detection of faces and hands in naturalistic egocentric videos. We then applied this model to a longitudinal collection of more than 1,700 head-mounted camera videos from three children aged 6 to 32 months. Using these detections, we confirm and extend prior results from cross-sectional studies. First, we found a moderate decrease in the proportion of faces in children's view across age and a higher proportion of hands in view than previously reported. Second, we found variability in the proportion of faces and hands viewed by different children in different locations (e.g., living room vs. kitchen), suggesting that individual activity contexts may shape the social information that infants experience. Third, we found evidence that children may see closer, larger views of people, hands, and faces earlier in development. These longitudinal analyses provide an additional perspective on the changes in the social information in view across the first few years of life and suggest that pose detection models can successfully be applied to naturalistic egocentric video datasets to extract descriptive statistics about infants' changing social environment.
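As a minimal illustrative sketch of the kind of frame-level analysis described above, the snippet below samples frames from a head-mounted camera video and tallies the proportion of sampled frames containing at least one detected face or hand. This is not the authors' pipeline: the `detector` callable, its return format, and the frame-sampling rate are hypothetical assumptions standing in for an OpenPose (or similar pose-estimation) call; only the OpenCV frame-reading calls are real library API.

```python
# Sketch (assumed interface, not the paper's code): estimate the proportion
# of frames in an egocentric video that contain at least one face or hand.
# Uses OpenCV for frame reading; `detector` is a hypothetical stand-in for a
# pose-detection model that returns one dict per detected person, e.g.
# {"face": True, "hand": False}.
import cv2

def face_hand_proportions(video_path, detector, sample_every_n=30):
    """Return the proportion of sampled frames with any face / any hand in view."""
    cap = cv2.VideoCapture(video_path)
    n_sampled = n_face = n_hand = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read failure
            break
        if frame_idx % sample_every_n == 0:  # subsample frames for speed
            detections = detector(frame)     # hypothetical call, one dict per person
            n_sampled += 1
            n_face += any(d.get("face") for d in detections)
            n_hand += any(d.get("hand") for d in detections)
        frame_idx += 1
    cap.release()
    if n_sampled == 0:
        return None
    return {"prop_face": n_face / n_sampled, "prop_hand": n_hand / n_sampled}
```

Per-video proportions computed this way could then be aggregated by child age or recording location, mirroring the descriptive analyses reported in the abstract; extending the detector's output with keypoint bounding-box sizes would likewise support the proximity (detection size) analyses mentioned above.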