
Predictive feedback to V1 dynamically updates with sensory input.

Grace Edwards, Petra Vetter, Fiona McGruer, Lucy S. Petro, Lars Muckli
Published in: Scientific Reports (2017)
Predictive coding theories propose that the brain creates internal models of the environment to predict upcoming sensory input. Hierarchical predictive coding models of vision postulate that higher visual areas generate predictions of sensory inputs and feed them back to early visual cortex. In V1, sensory inputs that do not match the predictions lead to amplified brain activation, but does this amplification process dynamically update to new retinotopic locations with eye-movements? We investigated the effect of eye-movements in predictive feedback using functional brain imaging and eye-tracking whilst presenting an apparent motion illusion. Apparent motion induces an internal model of motion, during which sensory predictions of the illusory motion feed back to V1. We observed attenuated BOLD responses to predicted stimuli at the new post-saccadic location in V1. Therefore, pre-saccadic predictions update their retinotopic location in time for post-saccadic input, validating dynamic predictive coding theories in V1.
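The core mechanism described above — V1 activation scaling with the mismatch between fed-back predictions and incoming sensory input — can be illustrated with a toy sketch. This is not the authors' model or analysis; the function name, the simple absolute-difference error, and the example values are illustrative assumptions only.

```python
def v1_response(sensory_input: float, prediction: float, gain: float = 1.0) -> float:
    """Toy prediction-error response: activation scales with the mismatch
    between the fed-back prediction and the actual sensory input.
    (Illustrative only; not the paper's model.)"""
    return gain * abs(sensory_input - prediction)

# A stimulus consistent with the internal model (e.g. illusory apparent
# motion) is well predicted, so the response is attenuated:
predicted = v1_response(sensory_input=1.0, prediction=1.0)    # -> 0.0

# A stimulus that violates the prediction yields an amplified response:
unpredicted = v1_response(sensory_input=1.0, prediction=0.0)  # -> 1.0
```

On this reading, the study's finding is that after a saccade the `prediction` term is already aligned with the new retinotopic location, so predicted post-saccadic input still produces the attenuated response.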