
dc.contributor.author: Metaxas, Dimitris [en_US]
dc.contributor.author: Dilsizian, Mark [en_US]
dc.contributor.author: Neidle, Carol [en_US]
dc.coverage.spatial: Miyazaki, Japan [en_US]
dc.date: 2018-02-03
dc.date.accessioned: 2018-07-25T15:31:29Z
dc.date.available: 2018-07-25T15:31:29Z
dc.date.issued: 2018-05-12
dc.identifier: http://lrec-conf.org/workshops/lrec2018/W1/pdf/book_of_proceedings.pdf
dc.identifier.citation: Dimitris Metaxas, Mark Dilsizian, Carol Neidle. 2018. "Scalable ASL Sign Recognition using Model-based Machine Learning and Linguistically Annotated Corpora." Language Resources and Evaluation. 8th Workshop on the Representation & Processing of Sign Languages: Involving the Language Community, Language Resources and Evaluation Conference 2018. Miyazaki, Japan, 2018-05-12.
dc.identifier.issn: 0010-4817
dc.identifier.uri: https://hdl.handle.net/2144/30049
dc.description.abstract: We report on the high success rates of our new, scalable, computational approach for sign recognition from monocular video, exploiting linguistically annotated ASL datasets with multiple signers. We recognize signs using a hybrid framework combining state-of-the-art learning methods with features based on what is known about the linguistic composition of lexical signs. We model and recognize the sub-components of sign production, with attention to hand shape, orientation, location, and motion trajectories, plus non-manual features, and we combine these within a CRF framework. The effect is to make the sign recognition problem robust, scalable, and feasible with smaller datasets than are required for purely data-driven methods. From a 350-sign vocabulary of isolated, citation-form lexical signs from the American Sign Language Lexicon Video Dataset (ASLLVD), including both 1- and 2-handed signs, we achieve a top-1 accuracy of 93.3% and a top-5 accuracy of 97.9%. The high probability with which we can produce 5 sign candidates that contain the correct result opens the door to potential applications: it is reasonable to provide a sign-lookup function that offers the user 5 possible signs, in decreasing order of likelihood, and asks the user to select the desired sign. [en_US]
dc.format.extent: 127-132 (6 pages) [en_US]
dc.publisher: European Language Resources Association (ELRA) [en_US]
dc.relation.ispartof: Language Resources and Evaluation
dc.rights: Attribution-NonCommercial 4.0 International [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Data format [en_US]
dc.subject: Cognitive science [en_US]
dc.subject: Artificial intelligence and image processing [en_US]
dc.subject: Sign recognition [en_US]
dc.subject: Model-based machine learning [en_US]
dc.subject: American Sign Language (ASL) [en_US]
dc.subject: Computer vision [en_US]
dc.subject: Linguistically annotated corpora [en_US]
dc.subject: Computer science [en_US]
dc.title: Scalable ASL sign recognition using model-based machine learning and linguistically annotated corpora [en_US]
dc.type: Conference materials [en_US]
pubs.elements-source: manual-entry [en_US]
pubs.notes: Embargo: No embargo [en_US]
pubs.organisational-group: Boston University [en_US]
pubs.organisational-group: Boston University, College of Arts & Sciences [en_US]
pubs.organisational-group: Boston University, College of Arts & Sciences, Department of Romance Studies [en_US]
pubs.publication-status: Published [en_US]
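The sign-lookup application described in the abstract — presenting the user with the 5 most likely signs, in decreasing order of likelihood — can be sketched as follows. This is a minimal illustration only: the sign names, scores, and weights are hypothetical, and the weighted-sum combination stands in for the paper's CRF framework, which is not reproduced here.

```python
# Illustrative top-5 sign lookup from per-component scores.
# All sign names, scores, and weights below are hypothetical stand-ins;
# the paper combines sub-component features (hand shape, orientation,
# location, motion trajectories, non-manual features) within a CRF
# framework rather than the simple weighted sum used here.

def top_k_signs(component_scores, weights, k=5):
    """Rank candidate signs by a weighted sum of per-component scores."""
    totals = {
        sign: sum(weights[c] * s for c, s in scores.items())
        for sign, scores in component_scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)[:k]

# Hypothetical per-component scores for six candidate signs.
scores = {
    "BOOK":  {"handshape": 0.9, "orientation": 0.8, "location": 0.7, "motion": 0.9},
    "OPEN":  {"handshape": 0.8, "orientation": 0.7, "location": 0.6, "motion": 0.8},
    "CLOSE": {"handshape": 0.7, "orientation": 0.6, "location": 0.6, "motion": 0.7},
    "TABLE": {"handshape": 0.4, "orientation": 0.5, "location": 0.6, "motion": 0.3},
    "HOUSE": {"handshape": 0.3, "orientation": 0.4, "location": 0.5, "motion": 0.2},
    "CAR":   {"handshape": 0.2, "orientation": 0.3, "location": 0.2, "motion": 0.1},
}
weights = {"handshape": 1.0, "orientation": 0.5, "location": 0.5, "motion": 1.0}

candidates = top_k_signs(scores, weights, k=5)
print(candidates)  # five candidate signs, most likely first
```

The user would then pick the intended sign from these five candidates, which is the lookup workflow the abstract motivates with its 97.9% top-5 accuracy.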

