Show simple item record

dc.contributor.author  Metaxas, Dimitris N.  en_US
dc.contributor.author  Dilsizian, Mark  en_US
dc.contributor.author  Neidle, Carol  en_US
dc.contributor.editor  Calzolari, Nicoletta  en_US
dc.contributor.editor  Choukri, Khalid  en_US
dc.contributor.editor  Cieri, Christopher  en_US
dc.contributor.editor  Declerck, Thierry  en_US
dc.contributor.editor  Goggi, Sara  en_US
dc.contributor.editor  Hasida, Kôiti  en_US
dc.contributor.editor  Isahara, Hitoshi  en_US
dc.contributor.editor  Maegaard, Bente  en_US
dc.contributor.editor  Mariani, Joseph  en_US
dc.contributor.editor  Mazo, Hélène  en_US
dc.contributor.editor  Moreno, Asunción  en_US
dc.contributor.editor  Odijk, Jan  en_US
dc.contributor.editor  Piperidis, Stelios  en_US
dc.contributor.editor  Tokunaga, Takenobu  en_US
dc.coverage.spatial  Miyazaki, Japan  en_US
dc.date.accessioned  2018-07-25T15:24:10Z
dc.date.available  2018-07-25T15:24:10Z
dc.date.issued  2018
dc.identifier  http://www.lrec-conf.org/lrec2018
dc.identifier.citation  Dimitris N. Metaxas, Mark Dilsizian, Carol Neidle. 2018. "Linguistically-driven Framework for Computationally Efficient and Scalable Sign Recognition." Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
dc.identifier.isbn  979-10-95546-00-9
dc.identifier.uri  https://hdl.handle.net/2144/30048
dc.description.abstract  We introduce a new general framework for sign recognition from monocular video using limited quantities of annotated data. The novelty of the hybrid framework we describe here is that we exploit state-of-the-art learning methods while also incorporating features based on what we know about the linguistic composition of lexical signs. In particular, we analyze hand shape, orientation, location, and motion trajectories, and then use CRFs to combine this linguistically significant information for purposes of sign recognition. Our robust modeling and recognition of these sub-components of sign production allow an efficient parameterization of the sign recognition problem as compared with purely data-driven methods. This parameterization enables a scalable and extendable time-series learning approach that advances the state of the art in sign recognition, as shown by the results reported here for recognition of isolated, citation-form, lexical signs from American Sign Language (ASL).  en_US
dc.publisher  European Language Resources Association (ELRA)  en_US
dc.relation.ispartof  Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
dc.subject  Data format  en_US
dc.subject  Cognitive science  en_US
dc.subject  Artificial intelligence and image processing  en_US
dc.subject  Artificial intelligence & image processing  en_US
dc.subject  American Sign Language (ASL)  en_US
dc.subject  Sign recognition  en_US
dc.subject  Model-based machine learning  en_US
dc.subject  Computer vision  en_US
dc.subject  Computer science  en_US
dc.title  Linguistically-driven framework for computationally efficient and scalable sign recognition  en_US
dc.type  Conference materials  en_US
pubs.elements-source  dblp  en_US
pubs.notes  Embargo: No embargo  en_US
pubs.organisational-group  Boston University  en_US
pubs.organisational-group  Boston University, College of Arts & Sciences  en_US
pubs.organisational-group  Boston University, College of Arts & Sciences, Department of Romance Studies  en_US
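The abstract describes combining linguistically significant feature channels (hand shape, orientation, location, motion) with CRFs for sign recognition. As a rough illustration only, with toy scores, illustrative weights, and hypothetical channel names that are not the authors' model, the core idea of additively combining per-frame channel scores and decoding with linear-chain CRF MAP (Viterbi) inference can be sketched as:

```python
# Hypothetical sketch: combine per-frame scores from separate linguistic
# channels into unary CRF potentials, then Viterbi-decode a label sequence.

def combine(channels, weights):
    """Weighted sum of per-channel score tables (each T x K) into one T x K table."""
    T, K = len(channels[0]), len(channels[0][0])
    return [[sum(w * ch[t][k] for ch, w in zip(channels, weights))
             for k in range(K)] for t in range(T)]

def viterbi(unary, transition):
    """MAP decoding for a linear-chain CRF.

    unary: T x K per-frame label scores; transition: K x K pairwise scores.
    Returns the highest-scoring label index sequence of length T.
    """
    T, K = len(unary), len(unary[0])
    score = list(unary[0])          # best score ending in each label at frame 0
    back = []                       # backpointers per frame
    for t in range(1, T):
        ptr, new = [], []
        for k in range(K):
            best_prev = max(range(K), key=lambda j: score[j] + transition[j][k])
            ptr.append(best_prev)
            new.append(score[best_prev] + transition[best_prev][k] + unary[t][k])
        back.append(ptr)
        score = new
    # Trace the best path backwards from the best final label.
    last = max(range(K), key=lambda k: score[k])
    path = [last]
    for ptr in reversed(back):
        last = ptr[last]
        path.append(last)
    return list(reversed(path))

# Toy example: two frames, two candidate labels, two feature channels
# (channel names and all numbers are illustrative, not from the paper).
handshape = [[2.0, 0.0], [0.0, 1.0]]
location  = [[1.0, 0.5], [0.5, 2.0]]
unary = combine([handshape, location], [1.0, 1.0])
labels = viterbi(unary, [[0.1, 0.0], [0.0, 0.1]])  # → [0, 1]
```

The additive combination mirrors the framework's appeal: each linguistic sub-component can be modeled and scored independently, so adding a channel or a sign class extends the parameterization without retraining a monolithic end-to-end model.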

