A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow
Date
1995-12
Authors
Cameron, Seth
Grossberg, Stephen
Guenther, Frank H.
Abstract
This paper describes a self-organizing neural network architecture that transforms optic flow information into representations of heading, scene depth, and moving object locations. These representations are used to reactively navigate in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due only to observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. All learning processes take place concurrently and require no external "teachers." Simulations of the network verify its performance using both noise-free and noisy optic flow information.
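To illustrate the heading-map stage described in the abstract, the following minimal Python sketch trains a self-organizing feature map on normalized translational flow patterns so that each map cell comes to code a heading direction. It is not the authors' original code: the flow model, map size, learning schedule, and winner-take-all update (neighborhood cooperation is omitted for brevity) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def translational_flow(heading, points):
        # Pure-translation flow radiates from the focus of expansion (FOE);
        # here the FOE is placed at `heading` in image coordinates (assumed model).
        v = points - heading
        return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

    points = rng.uniform(-1.0, 1.0, size=(64, 2))   # sampled image locations

    n_cells = 25                                    # heading-map cells (assumed size)
    w = rng.normal(scale=0.1, size=(n_cells, points.shape[0] * 2))

    n_steps = 5000
    for step in range(n_steps):
        heading = rng.uniform(-0.5, 0.5, size=2)    # random self-generated motion
        flow = translational_flow(heading, points).ravel()
        flow /= np.linalg.norm(flow)                # normalize the flow pattern
        winner = int(np.argmax(w @ flow))           # best-matching map cell
        lr = 0.5 * (1.0 - step / n_steps)           # decaying learning rate
        w[winner] += lr * (flow - w[winner])        # move winner toward the input

    # After training, the cell that wins for a novel flow field codes its heading.
    test = translational_flow(np.array([0.2, -0.1]), points).ravel()
    test /= np.linalg.norm(test)
    print("heading coded by cell", int(np.argmax(w @ test)))

As in the paper's action-perception cycle, training inputs are generated by self-motion alone, so no external teacher is needed; the map simply categorizes the flow patterns that self-generated movements produce.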
License
Copyright 1995 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and / or special permission.