
Data-driven segmentation of audiometric phenotypes across a large clinical cohort.

Aravindakshan Parthasarathy, Sandra Romero Pinto, Rebecca M. Lewis, William Goedicke, Daniel B. Polley
Published in: Scientific Reports (2020)
Pure tone audiograms are used to assess the degree and underlying source of hearing loss. Audiograms are typically categorized into a few canonical types, each thought to reflect a distinct pathology of the ear. Here, we analyzed 116,400 patient records from our clinic collected over a 24-year period and found that standard categorization left 46% of patient records unclassified. To better account for the full spectrum of hearing loss profiles, we used a Gaussian Mixture Model (GMM) to segment audiograms without any assumptions about frequency relationships, interaural symmetry, or etiology. The GMM converged on ten types, featuring varying degrees of high-frequency hearing loss, flat loss, mixed loss, and notched profiles, with predictable relationships to patient age and sex. A separate GMM clustering of 15,380 audiograms from the National Health and Nutrition Examination Survey (NHANES) identified six similar types that lacked only the more extreme hearing loss configurations observed in our patient cohort. Whereas traditional approaches distill hearing loss configurations down to a few canonical types by disregarding much of the underlying variability, an objective probabilistic model that accounted for all of the data identified an organized but more heterogeneous set of audiogram types that was consistent across two large clinical databases.
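The clustering approach described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: it uses scikit-learn's `GaussianMixture` on synthetic stand-in "audiograms" (threshold values at six standard test frequencies), since the clinical data are not available, and selects the number of components by BIC rather than the authors' actual model-selection procedure. The two synthetic groups (a flat-loss profile and a sloping high-frequency-loss profile) are hypothetical examples.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical audiograms: hearing thresholds (dB HL) at standard test
# frequencies, one feature vector per patient. Real audiograms would
# include both ears; one ear suffices for this sketch.
freqs = [250, 500, 1000, 2000, 4000, 8000]  # Hz
n_per_group = 250

# Synthetic stand-in data: a "flat loss" group (~40 dB HL at all
# frequencies) and a "high-frequency loss" group (sloping upward).
flat = rng.normal(40, 5, size=(n_per_group, len(freqs)))
sloping = rng.normal(10, 5, size=(n_per_group, len(freqs))) \
    + np.linspace(0, 60, len(freqs))
X = np.vstack([flat, sloping])

# Fit GMMs over a range of component counts and keep the best by BIC.
# Full covariance matrices impose no assumptions about relationships
# between frequencies, mirroring the assumption-free segmentation.
best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(X)
    bic = gmm.bic(X)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

labels = best_model.predict(X)
print(best_k)
```

On this toy data the BIC criterion recovers the two planted profile types; on real clinical audiograms the same procedure, run over a wider range of component counts, would converge on however many configurations the data support (ten in the clinic cohort, six in NHANES).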
Keyphrases
  • hearing loss
  • high frequency
  • case report
  • transcranial magnetic stimulation
  • primary care
  • big data
  • deep learning
  • artificial intelligence
  • single cell