Geometry vs appearance for discriminating between posed and spontaneous emotions

Zhang, Ligang, Tjondronegoro, Dian W., & Chandran, Vinod (2011) Geometry vs appearance for discriminating between posed and spontaneous emotions. Neural Information Processing: Lecture Notes in Computer Science, 7064/2011, pp. 431-440.

Spontaneous facial expressions differ from posed ones in appearance, timing, and accompanying head movements. Still images cannot provide timing or head-movement information directly. Indirectly, however, the distances between key points on a face, extracted from a still image using active shape models, can capture some movement and pose changes. This information is superposed on information about the non-rigid facial movement that is also part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions arising from discrete emotions? We investigate the performance of a machine vision system that uses SIFT appearance-based features and FAP geometric features to discriminate between posed and spontaneous versions of six basic emotions. Experimental results on the NVIE database demonstrate that fusing in geometric information yields only a marginal improvement over appearance features alone. With the fused features, surprise is the easiest emotion to distinguish (83.4% accuracy), while disgust is the most difficult (76.1%). Our results show that the facial regions important for discriminating the posed from the spontaneous version of an emotion differ from those important for classifying that emotion against other emotions. The distribution of the selected SIFT features shows that the mouth is more important for sadness and the nose for surprise, whereas both the nose and mouth are important for disgust, fear, and happiness; the eyebrows, eyes, nose, and mouth are all important for anger.
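The fusion described in the abstract can be illustrated with a minimal sketch: appearance and geometric feature vectors are combined at the feature level, and a classifier is trained on the fused representation. The function and class names below are hypothetical, the geometric weighting parameter is an assumption, and a simple nearest-centroid classifier stands in for whatever classifier the paper actually uses; the SIFT and FAP extraction steps are assumed to have already produced numeric vectors.

```python
import numpy as np

def fuse_features(sift_vec, fap_vec, geo_weight=1.0):
    """Feature-level fusion: concatenate an appearance descriptor
    (e.g. stacked SIFT histograms) with geometric features
    (e.g. FAP-style key-point distances), optionally weighting the
    geometric part. geo_weight is an illustrative knob, not from the paper."""
    return np.concatenate([np.asarray(sift_vec, dtype=float),
                           geo_weight * np.asarray(fap_vec, dtype=float)])

class NearestCentroid:
    """Toy stand-in classifier: assigns each sample to the class whose
    mean fused-feature vector is closest in Euclidean distance."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.labels = sorted(set(y))
        self.centroids = {c: X[y == c].mean(axis=0) for c in self.labels}
        return self

    def predict(self, X):
        return [min(self.labels,
                    key=lambda c: np.linalg.norm(x - self.centroids[c]))
                for x in X]

# Usage with synthetic data standing in for extracted features:
rng = np.random.default_rng(0)
posed = [fuse_features(rng.normal(0, 1, 8), rng.normal(0, 1, 4)) for _ in range(10)]
spont = [fuse_features(rng.normal(3, 1, 8), rng.normal(3, 1, 4)) for _ in range(10)]
X = np.array(posed + spont)
y = ["posed"] * 10 + ["spontaneous"] * 10
clf = NearestCentroid().fit(X, y)
```

The fused vector simply has the dimensionality of both modalities combined (here 8 + 4 = 12); the marginal gain reported in the paper suggests the geometric portion adds relatively little discriminative information beyond appearance.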

Impact and interest:

1 citations in Scopus
9 citations in Web of Science®


Full-text downloads:

247 since deposited on 22 Aug 2011
25 in the past twelve months


ID Code: 44130
Item Type: Journal Article
Refereed: Yes
Keywords: Facial expression, posed, spontaneous, SIFT, FAP
DOI: 10.1007/978-3-642-24965-5_49
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING (080100)
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > QUT Faculties & Divisions > Faculty of Science and Technology
Copyright Owner: Copyright 2011 Springer-Verlag GmbH Berlin Heidelberg
Copyright Statement: The definitive version is available from
Deposited On: 22 Aug 2011 22:03
Last Modified: 24 Jul 2014 05:59
