The effect of augmented input on the auditory comprehension of narratives for people with aphasia: a pilot investigation.
Shakila Dada, Nicola Stockley, Sarah E. Wallace, Rajinder Koul
Published in: Augmentative and Alternative Communication (Baltimore, Md.: 1985) (2019)
Augmented input is the strategy of supplementing expressive language with visuographic images, print, gestures, or objects in the environment, with the goal of facilitating comprehension of spoken language. The purpose of this study was to evaluate the relative effectiveness of two augmented input conditions in facilitating auditory comprehension of narrative passages in adults with aphasia. In the first condition, the communication partner (clinician) of the adult with aphasia actively pointed out key content words using visuographic supports (AI-PP). In the second condition, the communication partner did no active pointing (AI-NPP); that is, attention was not drawn to the visuographic supports. All 12 participants with aphasia listened to two narratives, one in each condition. Auditory comprehension was measured by assessing participants' accuracy in responding to 15 multiple-choice cloze-type statements related to the narratives. Of the 12 participants, seven responded more accurately to comprehension items in the AI-PP condition, four responded more accurately in the AI-NPP condition, and one scored the same in both conditions. These differences were not statistically significant (p > 0.05). Communication-partner-referenced augmented input using combined high-context and Picture Communication Symbols (PCS) visuographic supports improved response accuracy for some participants. Continued research is needed to determine the level of partner involvement and the frequency of augmented input that improve auditory comprehension.