A Neural Model of Multimodal Adaptive Saccadic Eye Movement Control by Superior Colliculus
How does the saccadic movement system select a target when visual, auditory, and planned movement commands differ? How do retinal, head-centered, and motor error coordinates interact during the selection process? Recent data on the superior colliculus (SC) reveal a spreading wave of activation across buildup cells whose peak activity covaries with the current gaze error. In contrast, the locus of peak activity remains constant at burst cells while their activity level decays with residual gaze error. A neural model answers these questions and simulates burst and buildup responses in visual, overlap, memory, and gap tasks. The model also simulates data on multimodal enhancement and suppression of activity in the deeper SC layers and suggests a functional role for NMDA receptors in this region. In particular, the model suggests how auditory and planned saccadic target positions become aligned with, and compete against, visually reactive target positions to select a movement command. For this to occur, a transformation between auditory and planned head-centered representations and a retinotopic target representation is learned. Burst cells in the model generate teaching signals to the spreading wave layer. Spreading waves are produced by corollary discharges that render planned and visually reactive targets dimensionally consistent and enable them to compete for attention to generate a movement command in motor error coordinates. The attentional selection process also helps to stabilize the map learning process. The model functionally interprets cells in the superior colliculus, frontal eye field, parietal cortex, mesencephalic reticular formation, paramedian pontine reticular formation, and substantia nigra pars reticulata.
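The learned coordinate transformation described above can be illustrated with a minimal computational sketch. This is not the paper's model: it only shows, under simplifying assumptions, how a head-centered target signal combined with an eye-position corollary discharge can learn to activate the correct retinotopic (motor-error) location when trained by a visually derived teaching signal. The one-hot coding, the delta learning rule, and all variable names here are assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's equations): learning a map from
# head-centered target position plus eye position (corollary discharge)
# to a retinotopic motor-error representation, trained by a teaching signal.
import numpy as np

rng = np.random.default_rng(0)
P = 11          # head-centered target positions, -5..+5 deg
E = 11          # eye-in-head positions, -5..+5 deg
R = 21          # retinotopic motor-error map, -10..+10 deg

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

W = np.zeros((R, P * E))   # adaptive weights: (target x eye) basis -> retinotopic map
lr = 0.5                   # learning rate for the delta rule

for _ in range(5000):
    tgt = rng.integers(-5, 6)   # target position in head-centered coordinates (deg)
    eye = rng.integers(-5, 6)   # current eye position (deg)
    # conjunctive basis unit for this (target, eye) combination
    basis = np.outer(one_hot(tgt + 5, P), one_hot(eye + 5, E)).ravel()
    # teaching signal: retinal/motor-error location of the target
    teach = one_hot(tgt - eye + 10, R)
    # delta rule: move the output toward the teaching signal
    W += lr * np.outer(teach - W @ basis, basis)

# after learning, the map recovers motor error = target - eye position
pred = W @ np.outer(one_hot(3 + 5, P), one_hot(-2 + 5, E)).ravel()
print(pred.argmax() - 10)   # predicted motor error for tgt=3, eye=-2
```

With one-hot conjunctive units there is no generalization to unseen (target, eye) pairs; the sketch only demonstrates that a supervised teaching signal suffices to align a head-centered representation with a retinotopic one, which is the role the abstract assigns to burst-cell teaching signals.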