Neural Dynamics of Motion Integration and Segmentation within and across Apertures

dc.contributor.author Grossberg, Stephen en_US
dc.contributor.author Mingolla, Ennio en_US
dc.contributor.author Viswanathan, Lavanya en_US
dc.date.accessioned 2011-11-14T19:00:16Z
dc.date.available 2011-11-14T19:00:16Z
dc.date.issued 2000-02 en_US
dc.identifier.uri http://hdl.handle.net/2144/2252
dc.description.abstract A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature tracking signals. Sparse feature tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: Directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature tracking signals with the help of competitive signals. Then a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. This all happens no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature tracking signals typically win over ambiguous motion signals. Model MST cells that encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency, and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given. en_US
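Read as a processing pipeline, the abstract describes four stages: a short-range filter that amplifies sparse feature tracking signals, long-range pooling into model MT, winner-take-all directional grouping in model MST, and MST-to-MT feedback that captures ambiguous regions. The sketch below is a minimal, hypothetical NumPy illustration of that loop; the array sizes, gains, kernel widths, and winner-take-all rule are all illustrative assumptions, not the report's actual model equations.

```python
import numpy as np

# Toy 1-D world: activity[d, x] = evidence for motion direction d at
# position x. All parameters below are illustrative, not from the report.
N_DIRS, N_POS = 8, 64
rng = np.random.default_rng(0)

def short_range_filter(transients, feature_mask, gain=3.0):
    """Selectively amplify sparse feature-tracking signals (e.g., at a
    line's intrinsic terminators) over ambiguous interior signals."""
    return transients * np.where(feature_mask, gain, 1.0)

def long_range_filter(signals, width=5):
    """Pool directional signals over space: a crude stand-in for model MT
    pooling over orientations, contrast polarities, and positions."""
    kernel = np.ones(width) / width
    return np.array([np.convolve(row, kernel, mode="same") for row in signals])

def mst_grouping(mt_activity, width=21):
    """Directional grouping network: pool MT activity over a broad region,
    then let directions compete (argmax) for a local winner per position."""
    kernel = np.ones(width) / width
    pooled = np.array([np.convolve(row, kernel, mode="same") for row in mt_activity])
    return pooled.argmax(axis=0)

def mst_to_mt_feedback(mt_activity, winners, boost=1.5, suppress=0.5):
    """Feedback: boost MT cells consistent with the MST winner and suppress
    inconsistent cells over the region each winner projects to."""
    out = mt_activity * suppress
    cols = np.arange(mt_activity.shape[1])
    out[winners, cols] = mt_activity[winners, cols] * boost
    return out

# Ambiguous motion everywhere, plus two sparse feature-tracking signals that
# vote for direction 2 (think: the terminators of a barberpole display).
transients = rng.random((N_DIRS, N_POS)) * 0.2
feature_mask = np.zeros((N_DIRS, N_POS), dtype=bool)
feature_mask[2, [5, 58]] = True
transients[2, [5, 58]] = 0.8

mt = long_range_filter(short_range_filter(transients, feature_mask))
for _ in range(5):  # iterate the MT-MST loop; the tracked direction spreads
    mt = mst_to_mt_feedback(mt, mst_grouping(mt))

print("winning direction per position:", mst_grouping(mt))
```

In this toy run, the amplified terminator signals win the MST competition near the seeded positions, and the feedback loop progressively imposes their direction on the ambiguous interior, which is the "motion capture" behavior the abstract refers to, here in a deliberately simplified form.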
dc.description.sponsorship Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333, IRI-94-01659); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657) en_US
dc.language.iso en_US en_US
dc.publisher Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems en_US
dc.relation.ispartofseries BU CAS/CNS Technical Reports;CAS/CNS-TR-2000-004 en_US
dc.rights Copyright 2000 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. The report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission. en_US
dc.subject Motion integration en_US
dc.subject Motion segmentation en_US
dc.subject Motion capture en_US
dc.subject Aperture problem en_US
dc.subject Feature tracking en_US
dc.subject MT en_US
dc.subject MST en_US
dc.subject Neural network en_US
dc.title Neural Dynamics of Motion Integration and Segmentation within and across Apertures en_US
dc.type Technical Report en_US
dc.rights.holder Boston University Trustees en_US
