Deep discovery of facial motions using a shallow embedding layer

, , , , , & (2017) Deep discovery of facial motions using a shallow embedding layer. In Luo, J, Zeng, W, & Zhang, Y J (Eds.) Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP). Institute of Electrical and Electronics Engineers Inc., United States of America, pp. 1567-1571.

PDF (3MB)

View at publisher

Description

Uniquely encoding the dynamics of facial actions has the potential to provide a spontaneous facial expression recognition system. The most promising existing approaches rely on deep learning of facial actions. However, current approaches are computationally intensive, requiring considerable memory and processing time, and they typically ignore the temporal aspect of facial actions, despite the wealth of information available in the spatial dynamics of facial movements and their temporal evolution from the neutral state to the apex state. To tackle these challenges, we propose a deep learning framework that uses 3D convolutional filters to extract spatio-temporal features, followed by an LSTM network that integrates the evolution of these short-duration spatio-temporal features as an emotion progresses from the neutral state to the apex state. To reduce parameter redundancy and accelerate learning in the recurrent neural network, we propose a shallow embedding layer that reduces the number of parameters passed to the LSTM by up to 98% without sacrificing recognition accuracy. Since the fully connected layer contains approximately 95% of the parameters in the network, we shrink this layer before passing features to the LSTM network, which significantly improves training speed and makes it possible to deploy a state-of-the-art deep network in real-time applications. We evaluate the proposed framework on the DISFA and UNBC-McMaster Shoulder Pain datasets.
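The described pipeline (3D convolutional feature extraction, a low-dimensional shallow embedding, then an LSTM over the neutral-to-apex sequence) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the input resolution (16-frame 112x112 clips), the class count, and the use of a fixed random projection for the embedding (suggested by the "Random Projection" keyword) are all assumptions.

# Minimal PyTorch sketch (hypothetical sizes; not the authors' code).
import torch
import torch.nn as nn

class C3DEmbedLSTM(nn.Module):
    def __init__(self, embed_dim=128, hidden_dim=256, num_classes=12):
        super().__init__()
        # 3D convolutional filters over short clips of shape (3, 16, 112, 112).
        self.conv3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        flat_dim = 64 * 8 * 28 * 28  # flattened feature size for the assumed input
        # Shallow embedding layer, sketched here as a fixed random projection:
        # it maps the large flattened feature vector down to embed_dim with no
        # trainable weights, so the parameters fed into the LSTM shrink sharply.
        self.register_buffer("proj", torch.randn(flat_dim, embed_dim) / embed_dim ** 0.5)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, seq_len, 3, 16, 112, 112), a sequence of short clips
        # covering the evolution from the neutral state to the apex state.
        b, s = clips.shape[:2]
        feats = self.conv3d(clips.flatten(0, 1))                # per-clip spatio-temporal features
        feats = (feats.flatten(1) @ self.proj).view(b, s, -1)   # shallow embedding
        out, _ = self.lstm(feats)                               # integrate temporal evolution
        return self.classifier(out[:, -1])                      # classify from the final step

Replacing the fixed projection with a small trainable linear layer is an equally plausible reading of "shallow embedding"; either way, the wide fully connected layer that would otherwise dominate the parameter count is removed before the LSTM.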

Impact and interest:

3 citations in Scopus
1 citation in Web of Science®
Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

196 since deposited on 22 Jan 2018
25 in the past twelve months

Full-text downloads shows the total number of times this work's files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. If a work has more than one file, the count includes downloads of all files.

ID Code: 115379
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
ORCID iD:
Baktashmotlagh, Mahsa: orcid.org/0000-0002-0158-0321
Denman, Simon: orcid.org/0000-0002-0983-5480
Sridharan, Sridha: orcid.org/0000-0003-4316-9001
Fookes, Clinton: orcid.org/0000-0002-8515-6324
Measurements or Duration: 5 pages
Keywords: Facial Expression Recognition, Random Projection
DOI: 10.1109/ICIP.2017.8296545
ISBN: 978-1-5090-2175-8
Pure ID: 33169644
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Funding:
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 22 Jan 2018 05:34
Last Modified: 02 Mar 2024 23:19