Automatic prediction of intelligible speaking rate for individuals with ALS from speech acoustic and articulatory samples.

Jun Wang, Prasanna V Kothalkar, Myungjong Kim, Andrea Bandini, Beiming Cao, Yana Yunusova, Thomas F Campbell, Daragh Heitzman, Jordan R Green
Published in: International Journal of Speech-Language Pathology (2018)
Purpose: This research aimed to automatically predict intelligible speaking rate for individuals with Amyotrophic Lateral Sclerosis (ALS) from speech acoustic and articulatory samples. Method: Twelve participants with ALS and two healthy control subjects produced a total of 1831 phrases. An NDI Wave system was used to collect tongue and lip movement data synchronously with the acoustic signal. A machine learning algorithm (i.e., a support vector machine) was used to predict intelligible speaking rate (speech intelligibility × speaking rate) from acoustic and articulatory features of the recorded samples. Result: Acoustic, lip movement, and tongue movement information, taken separately, yielded an R2 of 0.652, 0.660, and 0.678 and a Root Mean Squared Error (RMSE) of 41.096, 41.166, and 39.855 words per minute (WPM) between the predicted and actual values, respectively. Combining acoustic, lip, and tongue information yielded the highest R2 (0.712) and the lowest RMSE (37.562 WPM). Conclusion: The results revealed that the proposed analyses predicted participants' intelligible speaking rate with reasonably high accuracy from the acoustic and/or articulatory features of a single short speech sample. With further development, these analyses may be well suited for clinical applications that require automatic prediction of speech severity.
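For readers who want to see the shape of such an analysis, the sketch below trains scikit-learn's SVR to map per-phrase feature vectors to intelligible speaking rate (intelligibility × speaking rate, in WPM) and reports R2 and RMSE, mirroring the paper's evaluation metrics. The feature dimensionality, value ranges, kernel, and hyperparameters are assumptions for illustration and are not taken from the paper; the data here is synthetic.

```python
# Hypothetical sketch of the modeling setup: a support vector regressor
# maps per-phrase acoustic/articulatory features to intelligible
# speaking rate (intelligibility x speaking rate, in WPM).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_phrases, n_features = 1831, 60  # 1831 phrases as in the study; feature dimension is assumed

X = rng.normal(size=(n_phrases, n_features))        # stand-in acoustic + lip + tongue features
intelligibility = rng.uniform(0.4, 1.0, n_phrases)  # proportion of words understood (assumed scale)
speaking_rate = rng.uniform(60, 220, n_phrases)     # words per minute (assumed range)
y = intelligibility * speaking_rate                 # target: intelligible speaking rate (WPM)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardizing features before an RBF-kernel SVR is common practice;
# the abstract does not specify the kernel or preprocessing actually used.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"R^2  = {r2_score(y_test, pred):.3f}")
print(f"RMSE = {np.sqrt(mean_squared_error(y_test, pred)):.3f} WPM")
```

With real features in place of the synthetic ones, the same pipeline could be fit separately on acoustic, lip, and tongue feature sets, or on their concatenation, to reproduce the single-modality versus combined comparison reported in the results.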
Keyphrases
  • amyotrophic lateral sclerosis
  • machine learning
  • deep learning
  • artificial intelligence
  • neural network