3D face tracking and multi-scale, spatio-temporal analysis of linguistically significant facial expressions and head positions in ASL
Date
2014-01-01
Authors
Liu, Bo
Liu, Jingjing
Yu, Xiang
Metaxas, Dimitris
Neidle, Carol
Citation
Bo Liu, Jingjing Liu, Xiang Yu, Dimitris Metaxas, and Carol Neidle. 2014. "3D Face Tracking and Multi-scale, Spatio-temporal Analysis of Linguistically Significant Facial Expressions and Head Positions in ASL." In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland, May 26–31, 2014.
Abstract
Essential grammatical information is conveyed in signed languages by clusters of events involving facial expressions and movements of the head and upper body. This poses a significant challenge for computer-based sign language recognition. Here, we present new methods for the recognition of nonmanual grammatical markers in American Sign Language (ASL) based on: (1) new 3D tracking methods for the estimation of 3D head pose and facial expressions to determine the relevant low-level features; (2) methods for higher-level analysis of component events (raised/lowered eyebrows, periodic head nods and head shakes) used in grammatical markings—with differentiation of temporal phases (onset, core, offset, where appropriate), analysis of their characteristic properties, and extraction of corresponding features; (3) a 2-level learning framework to combine low- and high-level features of differing spatio-temporal scales. This new approach achieves significantly better tracking and recognition results than our previous methods.
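The abstract's 2-level idea—pooling per-frame low-level measurements (e.g., head pose, eyebrow height) within detected component events, then combining the result with sequence-level statistics for a downstream classifier—can be illustrated with a minimal sketch. This is not the paper's implementation; the feature layout, the `(start, end)` event spans, and the mean/std pooling are all illustrative assumptions.

```python
import numpy as np

def aggregate_event_features(frame_feats, events):
    """Pool low-level per-frame features over each detected component
    event (e.g., an eyebrow raise or head-shake span) into one
    high-level descriptor per event.

    frame_feats: (n_frames, n_dims) array of per-frame measurements
                 (hypothetical layout, e.g., head pose + eyebrow height).
    events:      list of (start, end) frame spans (illustrative; the
                 paper further distinguishes onset/core/offset phases).
    """
    pooled = []
    for start, end in events:
        seg = frame_feats[start:end]
        # Mean + std pooling is a stand-in for the richer event
        # properties described in the abstract.
        pooled.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.concatenate(pooled) if pooled else np.zeros(0)

def two_level_features(frame_feats, events):
    """Combine sequence-wide low-level statistics with event-level
    (high-level) descriptors into a single vector that a classifier
    for grammatical markers could consume."""
    low = np.concatenate([frame_feats.mean(axis=0), frame_feats.std(axis=0)])
    high = aggregate_event_features(frame_feats, events)
    return np.concatenate([low, high])
```

For a 20-frame clip with 3 measurements per frame and two detected events, `two_level_features` returns a fixed-length vector (6 sequence-level values plus 6 per event) that could feed any standard classifier; the actual framework learns the combination rather than simply concatenating.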