Show simple item record

dc.contributor.author: Berzhanskaya, J. [en_US]
dc.contributor.author: Grossberg, S. [en_US]
dc.contributor.author: Mingolla, E. [en_US]
dc.date.accessioned: 2011-11-14T18:19:28Z
dc.date.available: 2011-11-14T18:19:28Z
dc.date.issued: 2007-01
dc.identifier.uri: https://hdl.handle.net/2144/2045
dc.description.abstract: How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse "feature tracking signals" from, e.g., line ends, are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, end-stopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. [en_US]
dc.description.sponsorship: Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-0235398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624) [en_US]
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems [en_US]
dc.relation.ispartofseries: BU CAS/CNS Technical Reports; CAS/CNS-TR-2006-003
dc.rights: Copyright 2006 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission. [en_US]
dc.subject: Motion perception [en_US]
dc.subject: Depth perception [en_US]
dc.subject: Perceptual grouping [en_US]
dc.subject: Prestriate cortex [en_US]
dc.subject: V1 [en_US]
dc.subject: V2 [en_US]
dc.subject: MT [en_US]
dc.subject: MST [en_US]
dc.title: Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception [en_US]
dc.type: Technical Report [en_US]
dc.rights.holder: Boston University Trustees [en_US]
dc.relation.isnodouble: 1719
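The abstract above names divisive normalization among the model's mechanisms. As a minimal illustrative sketch only (not the report's actual equations; the function name and the semisaturation parameter `sigma` are hypothetical), the canonical form divides each unit's response by pooled population activity plus a constant:

```python
import numpy as np

def divisive_normalization(responses, sigma=0.1):
    """Toy divisive normalization: each response is divided by the
    summed activity of the pool plus a semisaturation constant sigma.
    Illustrative only; not the FORMOTION model's published equations."""
    responses = np.asarray(responses, dtype=float)
    return responses / (sigma + responses.sum())

# A strong signal among weak competitors retains most of its relative
# advantage, while overall gain is controlled by the pooled activity.
normalized = divisive_normalization([1.0, 0.2, 0.2], sigma=0.1)
```

This kind of gain control is one standard way contrast-dependent saturation is modeled in visual cortex; the report should be consulted for the model's exact formulation.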

