A Neural Network Architecture for Figure-ground Separation of Connected Scenic Figures


dc.contributor.author Grossberg, Stephen en_US
dc.contributor.author Wyse, Lonce en_US
dc.date.accessioned 2011-11-14T18:21:47Z
dc.date.available 2011-11-14T18:21:47Z
dc.date.issued 1991-05 en_US
dc.identifier.uri http://hdl.handle.net/2144/2067
dc.description.abstract A neural network model, called an FBF network, is proposed for automatic parallel separation of multiple image figures from each other and their backgrounds in noisy grayscale or multi-colored images. The figures can then be processed in parallel by an array of self-organizing Adaptive Resonance Theory (ART) neural networks for automatic target recognition. An FBF network can automatically separate the disconnected but interleaved spirals that Minsky and Papert introduced in their book Perceptrons. The network's design also clarifies why humans cannot rapidly separate interleaved spirals, yet can rapidly detect conjunctions of disparity and color, or of disparity and motion, that distinguish target figures from surrounding distractors. Figure-ground separation is accomplished by iterating operations of a Feature Contour System (FCS) and a Boundary Contour System (BCS), derived from an analysis of biological vision, in the order FCS-BCS-FCS; hence the term FBF. The FCS operations include nonlinear shunting networks that compensate for variable illumination and nonlinear diffusion networks that control filling-in. A key new feature of an FBF network is the use of filling-in for figure-ground separation. The BCS operations include oriented filters joined to competitive and cooperative interactions designed to detect, regularize, and complete boundaries in up to 50 percent noise, while suppressing the noise. A modified CORT-X filter is described which uses both on-cells and off-cells to generate a boundary segmentation from a noisy image. en_US
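Note: The following is a minimal illustrative sketch, not the report's actual equations or parameters. It shows, under assumed kernel sizes, constants, and an explicit-Euler discretization, two FCS-style operations mentioned in the abstract: a feedforward shunting on-center off-surround network for illumination compensation, and a boundary-gated diffusion step for filling-in. Function names (shunting_normalize, fill_in) and all numerical values are hypothetical.

import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical constants; the report's shunting parameters and kernels differ.
A, B, D = 1.0, 1.0, 1.0

def shunting_normalize(image, center_size=3, surround_size=9):
    # Steady state of a feedforward shunting on-center off-surround network:
    # divisive normalization of this kind discounts variable illumination.
    center = uniform_filter(image, size=center_size)      # on-center input
    surround = uniform_filter(image, size=surround_size)  # off-surround input
    return (B * center - D * surround) / (A + center + surround)

def fill_in(signal, boundaries, steps=200, rate=0.2):
    # Boundary-gated diffusive filling-in (explicit iterative sketch).
    # Activity spreads between neighboring cells except where the boundary
    # map (values in [0, 1], 1 = strong boundary) lowers the permeability,
    # so each boundary-enclosed region fills in toward a uniform value.
    S = signal.copy()
    perm = 1.0 - boundaries  # diffusion permeability, low across boundaries
    for _ in range(steps):
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            neighbor = np.roll(S, shift, axis=axis)
            gate = np.minimum(perm, np.roll(perm, shift, axis=axis))
            S += rate * gate * (neighbor - S)
    return S

In an FBF-style pipeline one would, roughly, normalize the image with the shunting stage, compute a boundary segmentation from the normalized image, fill in within those boundaries, and repeat (FCS-BCS-FCS), with the filled-in regions then serving to separate figures for recognition.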
dc.description.sponsorship Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); Hughes Research Laboratories (S1-804481-D, S1-903136); American Society for Engineering Education en_US
dc.language.iso en_US en_US
dc.publisher Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems en_US
dc.relation.ispartofseries BU CAS/CNS Technical Reports;CAS/CNS-TR-1991-012 en_US
dc.rights Copyright 1991 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and / or special permission. en_US
dc.subject Vision en_US
dc.subject Sensor fusion en_US
dc.subject Figure-ground separation en_US
dc.subject Segmentation en_US
dc.subject Neural network en_US
dc.subject Pattern recognition en_US
dc.subject Filling-in en_US
dc.subject Visual cortex en_US
dc.title A Neural Network Architecture for Figure-ground Separation of Connected Scenic Figures en_US
dc.type Technical Report en_US
dc.rights.holder Boston University Trustees en_US
