Deep facial analysis: A new phase I epilepsy evaluation using computer vision

Ahmedt Aristizabal, David, Fookes, Clinton, Nguyen Thanh, Kien, Denman, Simon, Sridharan, Sridha, & Dionisio, Sasha (2018) Deep facial analysis: A new phase I epilepsy evaluation using computer vision. Epilepsy & Behavior, 82, pp. 17-24.

PDF (736kB): Deep facial analysis_A new phase I epilepsy evaluation using computer vision_camera_ready.pdf
Available under License Creative Commons Attribution Non-commercial No Derivatives 2.5.


Description

Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, in which facial movements have largely been ignored; this is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when a frontal view of the face is available. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average area under the ROC curve (AUC) of 0.98. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in performance due to data limitations, with an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance the automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. Such computer-aided diagnosis may help support clinical decision-making and prevent erroneous localization and surgery.
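To make the region-based idea and the two evaluation schemes concrete, the following is a minimal sketch assuming a TensorFlow/Keras CNN-LSTM: a per-frame convolutional network extracts spatial features from cropped face regions, an LSTM models their temporal evolution across the clip, and a sigmoid head scores the clip. The paper's exact architecture, layer sizes, input shapes, and preprocessing are not given in this record, so every name and hyperparameter below is illustrative, as is the leave-one-subject-out loop (built on scikit-learn's LeaveOneGroupOut with patient IDs as groups), not the authors' implementation.

```python
# Illustrative CNN-LSTM sketch (not the authors' published code).
# Assumes clips of grayscale face crops shaped (frames, 64, 64, 1).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import LeaveOneGroupOut

def build_region_based_model(frames=30, height=64, width=64, channels=1):
    """Per-frame CNN features -> LSTM over time -> sigmoid clip score
    (e.g., ictal MTLE facial expression vs. baseline)."""
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu",
                      input_shape=(height, width, channels)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])
    model = models.Sequential([
        # Apply the same CNN to every frame in the clip.
        layers.TimeDistributed(cnn,
                               input_shape=(frames, height, width, channels)),
        layers.LSTM(64),                        # temporal dynamics
        layers.Dense(1, activation="sigmoid"),  # clip-level probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Leave-one-subject-out evaluation: 'groups' holds a patient ID per clip,
# so no patient contributes clips to both the training and test folds.
def evaluate_loso(X, y, groups, epochs=10):
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        model = build_region_based_model(frames=X.shape[1])
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        _, acc, auc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        scores.append((acc, auc))
    return np.mean(scores, axis=0)  # average test accuracy and AUC
```

Grouping folds by patient, as in the leave-one-subject-out scheme, keeps clips from the same patient out of both the training and test sets simultaneously; on a small cohort this yields a lower but more honest accuracy estimate than clip-level multifold cross-validation, consistent with the gap between 95.19% and 50.85% reported above.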

Impact and interest:

43 citations in Scopus
24 citations in Web of Science®
Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citations counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

348 since deposited on 28 Feb 2019
64 in the past twelve months

Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 126906
Item Type: Contribution to Journal (Journal Article)
Refereed: Yes
ORCID iD:
Ahmedt Aristizabal, David: orcid.org/0000-0003-1598-4930
Fookes, Clinton: orcid.org/0000-0002-8515-6324
Nguyen Thanh, Kien: orcid.org/0000-0002-3466-9218
Denman, Simon: orcid.org/0000-0002-0983-5480
Sridharan, Sridha: orcid.org/0000-0003-4316-9001
Measurements or Duration: 8 pages
Keywords: Convolutional neural network (CNN), Deep learning, Epilepsy evaluation, Facial semiology, Long short-term memory (LSTM), Neuroethology
DOI: 10.1016/j.yebeh.2018.02.010
ISSN: 1525-5069
Pure ID: 33395560
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 28 Feb 2019 04:32
Last Modified: 17 Jul 2024 11:44