
dc.contributor.author: Grossberg, Stephen [en_US]
dc.contributor.author: Myers, Christopher [en_US]
dc.date.accessioned: 2011-11-14T19:00:10Z
dc.date.available: 2011-11-14T19:00:10Z
dc.date.issued: 1999-01
dc.identifier.uri: https://hdl.handle.net/2144/2219
dc.description.abstract: How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can hereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances. [en_US]
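The resonance cycle the abstract describes — bottom-up item activation, top-down chunk feedback, and transmitter habituation that eventually collapses the resonance — can be sketched as a toy simulation. This is a minimal illustrative model, not the published ARTWORD equations; the variables, coupling terms, and all parameter values below are assumptions chosen only to reproduce the qualitative sequence of resonance onset followed by habituation-driven collapse.

```python
# Toy item-chunk resonant loop with a habituating transmitter gate.
# NOT the ARTWORD model: equations and parameters are illustrative assumptions.

def simulate(steps=2000, dt=0.01):
    x = 0.0   # working-memory item activity (phonemic item)
    y = 0.0   # list-chunk activity (unitized grouping)
    z = 1.0   # transmitter level in the feedback pathway (1 = fully stocked)
    trace = []
    for t in range(steps):
        inp = 1.0 if t < 500 else 0.0        # brief bottom-up phonemic input
        feedback = y * z                     # top-down support, gated by z
        dx = -x + inp + feedback             # item integrates input + feedback
        dy = -0.5 * y + x                    # chunk integrates item activity
        dz = 0.05 * (1.0 - z) - 0.5 * z * y  # transmitter habituates with use,
                                             # slowly replenishes toward 1
        x += dt * dx
        y += dt * dy
        z += dt * dz
        trace.append((x, y, z))
    return trace

trace = simulate()
peak_y = max(y for _, y, _ in trace)   # resonance boosts chunk activity...
final_y = trace[-1][1]                 # ...then habituation collapses it
```

Running this, chunk activity `y` climbs while the feedback loop is transmitter-gated "open", then collapses well below its peak once `z` habituates — the qualitative pattern that, in the abstract's account, frees the network to form subsequent resonances.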
dc.description.sponsorship: Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657) [en_US]
dc.language.iso: en_US
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems [en_US]
dc.relation.ispartofseries: BU CAS/CNS Technical Reports; CAS/CNS-TR-1999-001
dc.rights: Copyright 1999 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. the copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission. [en_US]
dc.subject: Speech perception [en_US]
dc.subject: Word recognition [en_US]
dc.subject: Consciousness [en_US]
dc.subject: Adaptive Resonance Theory (ART) [en_US]
dc.subject: Context effects [en_US]
dc.subject: Consonant perception [en_US]
dc.subject: Neural networks [en_US]
dc.subject: Silence duration [en_US]
dc.subject: Working memory [en_US]
dc.subject: Clustering [en_US]
dc.title: The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects [en_US]
dc.type: Technical Report [en_US]
dc.rights.holder: Boston University Trustees [en_US]

