
Learning to represent signals spike by spike.

Wieland Brendel, Ralph Bourdoukan, Pietro Vertechi, Christian K. Machens, Sophie Denève
Published in: PLoS Computational Biology (2020)
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
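To make the setting concrete, below is a minimal, hypothetical sketch of the kind of coordinated spike coding network the abstract refers to: a population whose leaky-filtered spike trains r are read out linearly as x̂ = D r, whose voltages track the projected readout error, and whose recurrent weights Ω are adapted with a local, voltage-dependent update at presynaptic spike times. Every concrete choice here (the decoder D, the thresholds, the cost μ, the input signal, and the simplified form of the plasticity rule) is an assumption made for illustration; the exact spike-timing-dependent rules and the convergence results are derived in the paper itself.

```python
import numpy as np

# Minimal sketch of a coordinated spike coding network with a local,
# voltage-dependent plasticity rule on the recurrent weights. This is an
# illustration of the general framework, not the paper's exact derivation:
# the decoder D, thresholds T, cost mu, and the simplified update of Omega
# below are assumptions chosen for brevity.

rng = np.random.default_rng(0)

K, N = 2, 20                # signal dimension, number of neurons
dt, lam = 1e-3, 10.0        # time step [s], leak rate of readout/membrane [1/s]
mu = 0.02                   # quadratic firing cost (regularizer)
eta = 0.01                  # learning rate for the recurrent weights

D = rng.normal(size=(K, N))
D /= np.linalg.norm(D, axis=0)            # unit-norm decoding weights
F = D.T                                   # feedforward (encoding) weights
T = 0.5 * (np.sum(D**2, axis=0) + mu)     # firing thresholds
Omega = np.zeros((N, N))                  # recurrent weights, to be learned
# After learning, Omega should come close to -D.T @ D (up to a cost term on
# the diagonal), the connectivity the framework requires.

V = np.zeros(N)   # membrane voltages ~ projected coding error
r = np.zeros(N)   # leaky-filtered spike trains (instantaneous rates)

def signal(step):
    """Slowly varying 2-D input signal (an arbitrary choice)."""
    t = step * dt
    return np.array([np.sin(2 * np.pi * 0.5 * t),
                     np.cos(2 * np.pi * 0.3 * t)])

x_prev = signal(0)
for step in range(1, 20000):
    x = signal(step)
    c = (x - x_prev) / dt + lam * x       # command signal c = dx/dt + lam * x
    x_prev = x

    # Voltage dynamics: leak + feedforward drive (recurrent input arrives
    # only at spike times, below).
    V += dt * (-lam * V + F @ c)
    r += dt * (-lam * r)

    # At most one spike per step: the neuron furthest above threshold fires.
    i = np.argmax(V - T)
    if V[i] > T[i]:
        # Simplified local plasticity: when neuron i spikes, each postsynaptic
        # neuron j updates Omega[j, i] using only its own voltage and rate.
        # This is a stand-in for the paper's derived rule, not a copy of it.
        Omega[:, i] -= eta * (V + mu * r + Omega[:, i])
        # Effect of the spike: recurrent feedback and readout update.
        V += Omega[:, i]
        r[i] += 1.0

x_hat = D @ r                              # linear readout of the signal
print("readout error:", np.linalg.norm(x - x_hat))
print("relative distance of Omega from -D^T D:",
      np.linalg.norm(Omega + D.T @ D) / np.linalg.norm(D.T @ D))
```

The greedy one-spike-per-step rule mirrors the idea that a spike is only worth firing when it reduces the coding error by more than the firing cost; under the assumptions above, the recurrent weights drift toward the −DᵀD structure that turns voltages, thresholds, and spikes into the functional quantities described in the abstract.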
Keyphrases
  • high efficiency
  • neural network