
dc.contributor.author: Lorenz, Sean (en_US)
dc.date.accessioned: 2015-08-07T03:18:34Z
dc.date.available: 2015-08-07T03:18:34Z
dc.date.issued: 2013
dc.date.submitted: 2013
dc.identifier.other: (ALMA)contemp
dc.identifier.uri: https://hdl.handle.net/2144/12809
dc.description: Thesis (Ph.D.)--Boston University (en_US)
dc.description.abstract: Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. The BCI was implemented in Python, an open-source programming language that combines ease of programming with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI. (en_US)
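The SSVEP approach the abstract describes classifies which flickering target a user is attending to by the strength of the brain response at each target's flicker frequency. As an illustration only (this is not the Unlock Project's actual code, and the function name, parameters, and harmonic count are assumptions), a minimal frequency-power classifier in Python, the dissertation's implementation language, might look like this:

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs, harmonics=2):
    """Guess which flicker frequency the user attended to.

    eeg: 1-D array of samples from an occipital EEG channel.
    fs: sampling rate in Hz.
    stim_freqs: candidate flicker frequencies, one per on-screen choice.
    Returns the candidate frequency with the most spectral power
    summed over its fundamental and the first `harmonics` multiples.
    """
    n = len(eeg)
    # Hann window reduces spectral leakage before the FFT.
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        # Sum power at the nearest FFT bin for each harmonic of f.
        score = sum(
            spectrum[np.argmin(np.abs(freqs - h * f))]
            for h in range(1, harmonics + 1)
        )
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]

# Illustrative use: a pure 15 Hz sine (2 s at 256 Hz) scores highest
# at the 15 Hz candidate.
t = np.arange(512) / 256.0
choice = classify_ssvep(np.sin(2 * np.pi * 15.0 * t), 256.0,
                        [12.0, 13.0, 14.0, 15.0])
```

A real system would add band-pass filtering, artifact rejection, and a rejection threshold for trials where no candidate clearly dominates, which is one plausible source of the wide 25% to 95% accuracy range reported across participants.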
dc.language.iso: en_US
dc.publisher: Boston University (en_US)
dc.rights: This dissertation is being made available in OpenBU by permission of its author, and is available for research purposes only. All rights are reserved to the author. (en_US)
dc.title: Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals (en_US)
dc.type: Thesis/Dissertation (en_US)
etd.degree.name: Doctor of Philosophy (en_US)
etd.degree.level: doctoral (en_US)
etd.degree.discipline: Cognitive and Neural Systems (en_US)
etd.degree.grantor: Boston University (en_US)

