A laminar cortical model of conscious speech perception: phonemic restoration and speech category learning
How do the laminar circuits of neocortex learn categories that can support conscious percepts of speech and language? How do learned speech categories become selectively tuned to different temporal sequences of speech items that are stored in short-term working memory in real time? How does the brain use resonant feedback between working memories and learned categories to restore information that is occluded by noise using the context of a word or sentence? A model is developed to simulate how multiple laminar cortical processing stages interact to support a conscious speech percept. In particular, acoustic features are unitized into acoustic items. These items activate representations in an item-and-order, or competitive queuing, sequential short-term working memory. The sequence of stored working memory items interacts reciprocally with unitized representations of item sequences, also called list categories or chunks, in a multiple-scale categorization network, called a masking field, that is capable of weighing the evidence for groupings of variable-length sequences of items as they are stored in the working memory through time. List chunks represent the most predictive item groupings at any time. These bottom-up and top-down interactions between auditory features, working memory, and list chunks generate a resonant wave of activation whose attended features embody consciously heard percepts, notably the completed percepts that can form even when acoustic information may be missing or occluded by noise. This occurs in the auditory illusion known as phonemic restoration, even if the disambiguating speech context occurs after the occluding noise. This thesis provides the first explanation and simulation of how phonemic restoration arises in a laminar cortical hierarchy. It also develops a masking field that learns to respond robustly to input patterns from working memory as they unfold in time. 
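The item-and-order (competitive queuing) working memory described above can be illustrated with a minimal sketch: items are stored with a primacy gradient of activation (earlier items more active), and the sequence is read out by repeatedly selecting the most active item and suppressing it. This is not the thesis model itself; the decay factor `gamma` and the function names are illustrative assumptions.

```python
def store(items, gamma=0.7):
    """Store a sequence as (item, activation) pairs.

    Earlier items receive higher activation (a primacy gradient),
    here modeled as gamma**position; gamma is an assumed constant.
    """
    return [(item, gamma ** i) for i, item in enumerate(items)]

def recall(wm):
    """Read out stored items by competitive selection.

    Repeatedly pick the most active item, emit it, and suppress it
    (remove it from the field), so items emerge in stored order.
    """
    remaining = list(wm)
    out = []
    while remaining:
        idx = max(range(len(remaining)), key=lambda j: remaining[j][1])
        out.append(remaining[idx][0])
        del remaining[idx]
    return out
```

For example, `recall(store(["C", "A", "T"]))` returns `["C", "A", "T"]`, since the primacy gradient guarantees the max-selection readout reproduces the stored order.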
Notably, for a given number of input items, all possible ordered sets of these items up to a fixed length can be learned. Both unsupervised and supervised learning simulations are provided. Supervised learning does not require as many list chunks to learn arbitrary sequences.
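The combinatorial scope of that claim can be made concrete. Interpreting "ordered sets" as sequences of distinct items (an assumption not spelled out above), the number of learnable sequences for n items up to length L is the sum of k-permutations of n for k = 1..L. A hedged sketch; the function name is invented for illustration:

```python
from math import perm

def count_ordered_sequences(n_items, max_len):
    """Count ordered sequences of distinct items with length 1..max_len.

    Assumes "ordered sets" means sequences without repeated items,
    so each length-k sequence is a k-permutation of the n items.
    """
    return sum(perm(n_items, k) for k in range(1, max_len + 1))
```

For instance, with 4 items and sequences up to length 3, the count is 4 + 12 + 24 = 40, which indicates how quickly the space of candidate list chunks grows with item count and sequence length.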
Thesis (Ph.D.)--Boston University.