
Active and dynamic information fusion for facial expression understanding from image sequences.
IEEE Trans Pattern Anal Mach Intell. 2005 May; 27(5):699-714.

Abstract

This paper explores the use of a multisensory information fusion technique with Dynamic Bayesian networks (DBNs) for modeling and understanding the temporal behaviors of facial expressions in image sequences. Our facial feature detection and tracking based on active IR illumination provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition is built on a dynamic and probabilistic framework that combines DBNs with Ekman's Facial Action Coding System (FACS) to systematically model the dynamic and stochastic behaviors of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic representation of the spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize ambiguity in recognition. Facial expressions are recognized by fusing not only the current visual observations but also previous visual evidence. Consequently, recognition becomes more robust and accurate through explicitly modeling the temporal behavior of facial expressions. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.
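The temporal fusion the abstract describes can be illustrated with a minimal filtering sketch. This is not the paper's model: the authors' DBN spans FACS action units and actively selected sensory channels, whereas the toy below uses a hypothetical three-expression state space and a plain forward (filtering) recursion, which is the core idea of fusing each frame's evidence with the prediction carried over from previous frames.

```python
import numpy as np

def dbn_filter(prior, transition, likelihoods):
    """Forward (filtering) recursion over a discrete state space:
    at each frame, predict from the previous belief, then fuse the
    current frame's visual evidence and renormalize."""
    belief = np.asarray(prior, dtype=float)
    for lik in likelihoods:
        predicted = transition.T @ belief      # temporal prediction step
        belief = predicted * np.asarray(lik)   # fuse current observation
        belief /= belief.sum()                 # normalize to a distribution
    return belief

# Hypothetical numbers for illustration only (not from the paper).
prior = [1 / 3, 1 / 3, 1 / 3]          # uniform over {happy, sad, surprise}
T = np.array([[0.8, 0.1, 0.1],         # expressions tend to persist
              [0.1, 0.8, 0.1],         # across consecutive frames
              [0.1, 0.1, 0.8]])
obs = [[0.7, 0.2, 0.1],                # three frames of noisy per-frame
       [0.6, 0.3, 0.1],                # likelihoods favoring state 0
       [0.8, 0.1, 0.1]]

posterior = dbn_filter(prior, T, obs)
print(posterior.argmax())              # consistent evidence -> state 0
```

Because each update fuses the new frame with the accumulated belief, a single noisy frame cannot flip the estimate, which is the robustness benefit the abstract attributes to explicit temporal modeling.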

Authors

Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, JEC 6003, 110 8th St., Troy, NY 12180, USA. zhangy5@rpi.edu

Pub Type(s)

Comparative Study
Evaluation Study
Journal Article
Research Support, U.S. Gov't, Non-P.H.S.
Validation Study

Language

eng

PubMed ID

15875792

Citation

Zhang, Yongmian, and Qiang Ji. "Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, 2005, pp. 699-714.
Zhang Y, Ji Q. Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Trans Pattern Anal Mach Intell. 2005;27(5):699-714.
Zhang, Y., & Ji, Q. (2005). Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), 699-714.
Zhang Y, Ji Q. Active and Dynamic Information Fusion for Facial Expression Understanding From Image Sequences. IEEE Trans Pattern Anal Mach Intell. 2005;27(5):699-714. PubMed PMID: 15875792.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - Active and dynamic information fusion for facial expression understanding from image sequences.
AU - Zhang,Yongmian
AU - Ji,Qiang
PY - 2005/5/7/pubmed
PY - 2005/6/1/medline
PY - 2005/5/7/entrez
SP - 699
EP - 714
JF - IEEE transactions on pattern analysis and machine intelligence
JO - IEEE Trans Pattern Anal Mach Intell
VL - 27
IS - 5
SN - 0162-8828
UR - https://www.unboundmedicine.com/medline/citation/15875792/Active_and_dynamic_information_fusion_for_facial_expression_understanding_from_image_sequences_
L2 - https://doi.ieeecomputersociety.org/10.1109/TPAMI.2005.93
DB - PRIME
DP - Unbound Medicine
ER -