Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

dc.contributor.author Chey, Jonathan en_US
dc.contributor.author Grossberg, Stephen en_US
dc.contributor.author Mingolla, Ennio en_US
dc.date.accessioned 2011-11-14T18:50:19Z
dc.date.available 2011-11-14T18:50:19Z
dc.date.issued 1995-11 en_US
dc.identifier.uri http://hdl.handle.net/2144/2208
dc.description.abstract A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere into a global direction of motion or fail to do so, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points, and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend to a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception. en_US
dc.description.sponsorship Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530) en_US
dc.language.iso en_US en_US
dc.publisher Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems en_US
dc.relation.ispartofseries BU CAS/CNS Technical Reports;CAS/CNS-TR-1995-031 en_US
dc.rights Copyright 1995 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and / or special permission. en_US
dc.subject Vision en_US
dc.subject Motion perception en_US
dc.subject Directional grouping en_US
dc.subject Speed perception en_US
dc.subject Aperture problem en_US
dc.subject Attention en_US
dc.subject Visual cortex en_US
dc.subject Neural network en_US
dc.title Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction en_US
dc.type Technical Report en_US
dc.rights.holder Boston University Trustees en_US
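
The abstract describes a processing cascade in which unambiguous direction signals at feature-tracking points (e.g., line endpoints) propagate through spatially long-range filters and competition to capture ambiguous aperture signals at interior points. The short Python sketch below illustrates only that capture dynamic in a toy one-dimensional form; it is not the report's model, and the Gaussian kernel, the softmax competition, and all parameter values are illustrative assumptions.

import numpy as np

# Toy illustration (not the CAS/CNS model): direction signals clamped at
# feature-tracking endpoints spread inward via a long-range spatial filter
# and capture the ambiguous interior aperture signals.
N_LOC, N_DIR = 21, 8        # spatial locations; direction channels
TRUE_DIR = 2                # hypothetical unambiguous direction at endpoints

act = np.full((N_LOC, N_DIR), 0.5)           # interior: ambiguous (flat)
act[0, :] = act[-1, :] = 0.1
act[0, TRUE_DIR] = act[-1, TRUE_DIR] = 1.0   # feature-tracking points

def long_range_filter(a, sigma=3.0):
    # Gaussian spatial pooling within each direction channel (assumed kernel).
    x = np.arange(N_LOC, dtype=float)
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return (k / k.sum(axis=1, keepdims=True)) @ a

for _ in range(30):
    pooled = long_range_filter(act)
    w = np.exp(4.0 * pooled)                 # soft winner-take-all across
    act = w / w.sum(axis=1, keepdims=True)   # directions at each location
    act[0, :] = act[-1, :] = 0.0             # re-clamp endpoints so the
    act[0, TRUE_DIR] = act[-1, TRUE_DIR] = 1.0   # unambiguous signal persists

print(act.argmax(axis=1))   # expected: every location converges to TRUE_DIR

Running the sketch shows the winning direction at every location converging to the endpoint direction, the schematic analogue of feature-tracking signals resolving the aperture ambiguity.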
