Laying the Foundation: Modern Transformers for Gold-Standard Sleep Analysis.
William G. Coon, Mattson Ogg
Published in: bioRxiv: the preprint server for biology (2024)
Accurate sleep assessment is critical to the practice of sleep medicine and sleep research. Excitingly, the recent availability of large quantities of publicly available sleep data presents opportunities for data-driven discovery efforts. Transformers are flexible neural network architectures that not only excel at classification tasks, but can also enable data-driven discovery through unsupervised or self-supervised learning, which requires no human annotation of the input data. The first step toward data-driven discovery in sleep data with transformers is to train a model on a supervised learning task (here, scoring sleep) to form the foundation for unsupervised extensions; it is not yet clear which model optimizations are ideal for this. To address this gap and lay the groundwork for data-driven discovery, we explored optimizations of a transformer-based model trained on the canonical 5-class sleep stage classification task. We examined different configurations of model size, input data size (sleep sequence length), input channels, and the use of multiple or single nights of sleep during training. We found that most configurations met or exceeded clinical criteria (accuracy/agreement above approximately 80--85%). Curiously, they achieved this while operating on as little as 10 minutes of input data at a time, which precludes the use of the valuable sleep cycle history that a human expert typically draws on. Compared with most recurrent neural network analogues, the attention-based transformer was better able to distinguish REM sleep from neighboring classes using EEG alone. Finally, the full-size transformer model did not significantly outperform smaller versions, indicating that lightweight variants are sufficient for this task.
In summary, we found that small transformer models can match and even outperform other state-of-the-art automated sleep scoring techniques, while also providing the basis for future data-driven discovery efforts using large sleep datasets, with or without human annotations.
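To make the setup concrete, the sketch below illustrates the general shape of the task described above: a single self-attention layer mixing information across a sequence of sleep epochs, followed by a linear head that assigns each 30-second epoch to one of the 5 canonical stages (Wake, N1, N2, N3, REM). This is a hypothetical minimal illustration with random placeholder weights and made-up feature dimensions, not the authors' model or code; a 10-minute input window corresponds to 20 such epochs.

```python
# Minimal single-head self-attention sleep-stage classifier sketch.
# Hypothetical illustration only: random weights, assumed dimensions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_classify(x, d_model=64, n_classes=5):
    """One self-attention layer plus a linear head; weights are placeholders."""
    d_in = x.shape[-1]
    Wq, Wk, Wv = (rng.normal(0, d_in ** -0.5, (d_in, d_model)) for _ in range(3))
    Wo = rng.normal(0, d_model ** -0.5, (d_model, n_classes))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model))  # (T, T) epoch-to-epoch weights
    h = attn @ v                                # context-mixed epoch features
    return softmax(h @ Wo)                      # per-epoch stage probabilities

# A 10-minute window of polysomnography: 20 epochs of 30 s, each reduced to a
# 128-dimensional feature embedding (dimension is an assumption for the sketch).
epochs = rng.normal(size=(20, 128))
probs = attention_classify(epochs)
print(probs.shape)        # one 5-way stage distribution per epoch
```

The attention matrix is what lets every epoch's prediction draw on the rest of the input window, which is the mechanism the abstract contrasts with recurrent models that must carry context through a hidden state.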