Boundary Contour System and Feature Contour System
When humans gaze upon a scene, our brains rapidly combine several types of locally ambiguous visual information to generate a globally consistent and unambiguous representation of Form-And-Color-And-DEpth, or FACADE. This raises the question: What new computational principles and mechanisms are needed to understand how multiple sources of visual information cooperate automatically to generate a percept of 3-dimensional form? This chapter reviews modeling work aimed at developing such a general-purpose vision architecture. The architecture clarifies how scenic data about boundaries, textures, shading, depth, multiple spatial scales, and motion can be cooperatively synthesized in real time into a coherent representation of 3-dimensional form. It embodies a new vision theory that attempts to clarify the functional organization of the visual brain from the lateral geniculate nucleus (LGN) to the extrastriate cortical regions V4 and MT. Moreover, the same processes that help explain how the visual cortex processes retinal signals are equally valuable for processing noisy multidimensional data from artificial sensors, such as synthetic aperture radar, laser radar, multispectral infrared, magnetic resonance, and high-altitude photographs. These processes generate 3-D boundary and surface representations of a scene.