
dc.contributor.authorGrossberg, Stephenen_US
dc.date.accessioned2011-11-14T18:17:06Z
dc.date.available2011-11-14T18:17:06Z
dc.date.issued2007-09
dc.identifier.urihttps://hdl.handle.net/2144/1952
dc.description.abstractHow do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer in its trajectory. Form and motion processes are needed to accomplish this using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays.en_US
dc.description.sponsorshipNational Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)en_US
dc.language.isoen_US
dc.publisherBoston University Center for Adaptive Systems and Department of Cognitive and Neural Systemsen_US
dc.relation.ispartofseriesBU CAS/CNS Technical Reports;CAS/CNS-TR-2007-014
dc.rightsCopyright 2007 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and / or special permission.en_US
dc.subjectMotion integrationen_US
dc.subjectMotion segmentationen_US
dc.subjectMotion captureen_US
dc.subjectDecision-makingen_US
dc.subjectAperture problemen_US
dc.subjectFeature trackingen_US
dc.subjectFormationen_US
dc.subjectComplementary computingen_US
dc.subjectV1en_US
dc.subjectV2en_US
dc.subjectMTen_US
dc.subjectMSTen_US
dc.subjectLIPen_US
dc.subjectNeural networksen_US
dc.titleNeural Models of Motion Integration, Segmentation, and Probabilistic Decision-Makingen_US
dc.typeTechnical Reporten_US
dc.rights.holderBoston University Trusteesen_US

