Interpretable deep learning for deconvolutional analysis of neural signals.
Bahareh Tolooshams, Sara Matias, Hao Wu, Simona Temereanca, Naoshige Uchida, Venkatesh N. Murthy, Paul Masset, Demba Ba
Published in: bioRxiv : the preprint server for biology (2024)
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and function. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the responses of neurons in piriform cortex. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural dynamics.
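The DUNL implementation itself is not reproduced here, but the core idea of algorithm unrolling can be illustrated with a minimal sketch: each iteration of a sparse-coding solver (here ISTA) becomes one "layer" of a network, so the learned dictionary weights retain a direct generative-model interpretation. All names and parameters below (the toy dictionary `D`, the sparsity weight `lam`, the layer count) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty; this is what induces sparsity.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=100):
    """Unroll ISTA for the sparse generative model y ~ D @ x.

    Each loop iteration corresponds to one network layer; in an
    unrolled-learning setup such as DUNL, D (the network weights)
    would be trained by backpropagation rather than fixed.
    """
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient step
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):                # one ISTA step == one layer
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

# Toy demo: recover a 2-sparse code from a noiseless mixture.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
y = D @ x_true
x_hat = unrolled_ista(y, D, lam=0.05, n_layers=200)
```

In a deconvolutional variant, the matrix multiplications would be replaced by convolutions with learned kernels, so each kernel can be read directly as a stimulus-driven response motif.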
Keyphrases
- deep learning
- neural network
- artificial intelligence
- convolutional neural network
- machine learning