Show simple item record

dc.contributor.author	Metaxas, Dimitris	en_US
dc.contributor.author	Liu, Bo	en_US
dc.contributor.author	Yang, Fei	en_US
dc.contributor.author	Yang, Peng	en_US
dc.contributor.author	Michael, Nicholas	en_US
dc.contributor.author	Neidle, Carol	en_US
dc.coverage.spatial	Istanbul, TURKEY	en_US
dc.date.accessioned	2018-11-02T14:04:17Z
dc.date.available	2018-11-02T14:04:17Z
dc.date.issued	2012-01-01
dc.identifier	http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000323927702079&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=6e74115fe3da270499c3d65c9b17d654
dc.identifier.citation	Dimitris Metaxas, Bo Liu, Fei Yang, Peng Yang, Nicholas Michael, Carol Neidle. 2012. "Recognition of Nonmanual Markers in American Sign Language (ASL) Using Non-Parametric Adaptive 2D-3D Face Tracking." LREC 2012 - EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION. 8th International Conference on Language Resources and Evaluation (LREC). Istanbul, TURKEY, 2012-05-21 - 2012-05-27.
dc.identifier.uri	https://hdl.handle.net/2144/31898
dc.description.abstract	This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) We have built a stochastic and adaptive ensemble of face trackers to address factors resulting in lost face track; (2) We combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; (3) We use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed new framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results that are based on the use of a dataset containing 330 sentences from videos that were collected and linguistically annotated at Boston University.	en_US
dc.format.extent	p. 2414-2420	en_US
dc.language	English
dc.publisher	EUROPEAN LANGUAGE RESOURCES ASSOC-ELRA	en_US
dc.relation.ispartof	LREC 2012 - EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
dc.subject	Linguistics	en_US
dc.subject	Social sciences	en_US
dc.subject	Language & linguistics	en_US
dc.subject	Linguistically based sign language recognition	en_US
dc.subject	Model-based nonmanual expression tracking	en_US
dc.subject	Learning-based recognition	en_US
dc.title	Recognition of nonmanual markers in American Sign Language (ASL) using non-parametric adaptive 2D-3D face tracking	en_US
dc.type	Conference materials	en_US
pubs.elements-source	manual-entry	en_US
pubs.notes	Embargo: Not known	en_US
pubs.organisational-group	Boston University	en_US
pubs.organisational-group	Boston University, College of Arts & Sciences	en_US
pubs.organisational-group	Boston University, College of Arts & Sciences, Department of Linguistics	en_US
pubs.publication-status	Published	en_US

