NeoSSNet: Real-Time Neonatal Chest Sound Separation Using Deep Learning.
Yang Yi Poh, Ethan Grooby, Kenneth Tan, Lindsay Zhou, Arrabella King, Ashwin Ramanathan, Atul Malhotra, Mehrtash Harandi, Faezeh Marzbanrad. Published in: IEEE Open Journal of Engineering in Medicine and Biology (2024)
Goal: Auscultation for neonates is a simple and non-invasive method of diagnosing cardiovascular and respiratory disease. However, obtaining high-quality chest sounds containing only heart or lung sounds is non-trivial. Hence, this study introduces a new deep-learning model named NeoSSNet and evaluates its performance in neonatal chest sound separation against previous methods.

Methods: We propose a mask-based architecture similar to Conv-TasNet. The encoder and decoder consist of a 1D convolution and a 1D transposed convolution, while the mask generator consists of a convolution and transformer architecture. The input chest sounds were first encoded as a sequence of tokens using the 1D convolution. The tokens were then passed to the mask generator to produce two masks, one for heart sounds and one for lung sounds. Each mask was then applied to the input token sequence, and the masked tokens were converted back to waveforms using the 1D transposed convolution.

Results: Our proposed model outperformed previous methods on objective distortion measures, with improvements ranging from 2.01 dB to 5.06 dB. The proposed model is also significantly faster than the previous methods, running at least 17 times faster.

Conclusions: The proposed model could be a suitable preprocessing step for any health monitoring system where only the heart sound or the lung sound is desired.
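To make the Methods description concrete, below is a minimal PyTorch sketch of a Conv-TasNet-style mask-based separator following the pipeline outlined above (1D convolutional encoder, convolution + transformer mask generator producing one mask per source, 1D transposed convolutional decoder). The class name `NeoSSNetSketch` and all hyperparameters (filter count, kernel size, stride, transformer depth) are illustrative assumptions, not the authors' released implementation or their reported settings.

```python
import torch
import torch.nn as nn


class NeoSSNetSketch(nn.Module):
    """Mask-based two-source (heart/lung) separator sketch, Conv-TasNet style.

    All hyperparameter values here are assumed for illustration only.
    """

    def __init__(self, n_filters=256, kernel_size=16, stride=8,
                 n_heads=4, n_layers=4, n_sources=2):
        super().__init__()
        self.n_sources = n_sources
        self.n_filters = n_filters
        # Encoder: raw waveform -> sequence of learned tokens
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride, bias=False)
        # Mask generator: pointwise convolution followed by transformer layers,
        # then a head emitting one mask per source
        self.bottleneck = nn.Conv1d(n_filters, n_filters, 1)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=n_filters, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.mask_head = nn.Conv1d(n_filters, n_sources * n_filters, 1)
        # Decoder: masked tokens -> waveform
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size,
                                          stride=stride, bias=False)

    def forward(self, mix):                       # mix: (batch, 1, time)
        tokens = self.encoder(mix)                # (batch, F, T')
        feats = self.bottleneck(tokens)
        feats = self.transformer(feats.transpose(1, 2)).transpose(1, 2)
        masks = torch.sigmoid(self.mask_head(feats))           # (batch, S*F, T')
        masks = masks.view(-1, self.n_sources, self.n_filters, masks.shape[-1])
        masked = masks * tokens.unsqueeze(1)      # one masked token sequence per source
        # Decode each source (heart, lung) back to a waveform;
        # output length may differ slightly from the input due to striding
        out = [self.decoder(masked[:, s]) for s in range(self.n_sources)]
        return torch.stack(out, dim=1)            # (batch, S, 1, time)


# Usage sketch (sampling rate and clip length are assumed):
model = NeoSSNetSketch()
mixture = torch.randn(1, 1, 16000)
heart, lung = model(mixture).unbind(dim=1)
```

This sketch only illustrates the data flow described in the abstract; training objectives, the exact mask generator configuration, and preprocessing are not specified here and would follow the full paper.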