
dc.contributor.author: Boardman, Ian
dc.contributor.author: Grossberg, Stephen
dc.contributor.author: Myers, Christopher
dc.contributor.author: Cohen, Michael
dc.date.accessioned: 2011-11-14T19:07:14Z
dc.date.available: 2011-11-14T19:07:14Z
dc.date.issued: 1998-06
dc.identifier.uri: https://hdl.handle.net/2144/2338
dc.description.abstract: How does the brain extract invariant properties of variable-rate speech? A neural model, called PHONET, is developed to explain aspects of this process and, along the way, data about perceptual context effects. For example, in consonant-vowel (CV) syllables such as /ba/ and /wa/, an increase in the duration of the vowel can cause a switch in the percept of the preceding consonant from /w/ to /b/ (Miller and Liberman, 1979). The frequency extent of the initial formant transitions of fixed duration also influences the percept (Schwab, Sawusch, and Nusbaum, 1981). PHONET quantitatively simulates over 98% of the variance in these data using a single set of parameters. The model also qualitatively explains many data about other perceptual context effects. In the model, C and V inputs are filtered by parallel auditory streams that respond preferentially to transient and sustained properties of the acoustic signal before being stored in parallel working memories. A lateral inhibitory network of onset- and rate-sensitive cells in the transient channel extracts measures of frequency transition rate and extent. Greater activation of the transient stream can increase the processing rate in the sustained stream via a cross-stream automatic gain-control interaction. The stored activities across these gain-controlled working memories provide a basis for rate-invariant perception, since the transient-to-sustained gain control tends to preserve the relative activities across the transient and sustained working memories as speech rate changes. Comparisons with the alternative models tested suggest that the fit cannot be attributed to the simplicity of the data. Brain analogs of the model cell types are described.
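
To make the abstract's gain-control argument concrete, the following minimal Python sketch illustrates the idea in its simplest form. It is not the report's PHONET equations; the function name, parameter values, and the k_gain constant are assumptions made purely for illustration. A CV syllable is reduced to a formant transition of fixed frequency extent followed by a steady vowel, and the sustained stream's integration gain is scaled by the transient stream's rate-sensitive activity, so the stored working-memory activities (and their ratio) are unchanged when the syllable is compressed to a faster speech rate.

    # Illustrative toy sketch (assumed, not the actual PHONET model).
    def stored_activities(extent_hz, dt_c, dt_v, k_gain=1.0):
        """Return (transient_store, sustained_store) for one CV syllable.

        extent_hz : frequency extent of the initial formant transition (Hz)
        dt_c      : consonant transition duration (s)
        dt_v      : vowel duration (s)
        k_gain    : illustrative gain-control constant (assumed value)
        """
        transient_rate = extent_hz / dt_c         # rate-sensitive cell activity
        transient_store = transient_rate * dt_c   # working memory integrates over the transition
        sustained_gain = k_gain * transient_rate  # cross-stream automatic gain control
        sustained_store = sustained_gain * dt_v   # gain-controlled vowel integration
        return transient_store, sustained_store

    if __name__ == "__main__":
        # The same syllable spoken at a normal and at a doubled speech rate.
        slow = stored_activities(extent_hz=600.0, dt_c=0.040, dt_v=0.200)
        fast = stored_activities(extent_hz=600.0, dt_c=0.020, dt_v=0.100)
        print("slow-rate stores:", slow)  # (600.0, 3000.0)
        print("fast-rate stores:", fast)  # (600.0, 3000.0) -> relative activities preserved
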
dc.description.sponsorship: Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-94-1-0940, N00014-94-1-0597, N00014-95-1-0657)
dc.language.iso: en_US
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems
dc.relation.ispartofseries: BUCAS/CNS Technical Reports; BUCAS/CNS-TR-1998-004
dc.rights: Copyright 1998 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission.
dc.title: Neural Dynamics of Perceptual Order and Context Effects for Variable-Rate Speech Syllables
dc.type: Technical Report
dc.rights.holder: Boston University Trustees

