Deep motion analysis for epileptic seizure classification
PDF (360kB): EMBC18_0584_FI.pdf
Description
Visual motion cues such as facial expression and pose are natural semiology features which an epileptologist observes to identify epileptic seizures. However, these cues have not been effectively exploited for automatic detection due to the diverse variations in seizure appearance within and between patients. Here we present a multi-modal analysis approach to quantitatively classify patients with mesial temporal lobe (MTLE) and extra-temporal lobe (ETLE) epilepsy, relying on the fusion of facial expressions and pose dynamics. We propose a new deep learning approach that leverages recent advances in Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks to automatically extract spatiotemporal features from facial and pose semiology using recorded videos. A video dataset from 12 patients with MTLE and 6 patients with ETLE was collected in an Australian hospital for the experiments. Our experiments show that facial semiology and body movements can be effectively recognized and tracked, and that they provide useful evidence to identify the type of epilepsy. A multi-fold cross-validation of the fusion model exhibited an average test accuracy of 92.10%, while a leave-one-subject-out cross-validation scheme, the first in the literature, achieved an accuracy of 58.49%. The proposed approach is capable of modelling semiology features which effectively discriminate between seizures arising from temporal and extra-temporal brain areas. Our approach can be used as a virtual assistant, which will save time, improve patient safety and provide objective clinical analysis to assist with clinical decision making.
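The pipeline the abstract describes — per-frame CNN features, an LSTM over each stream's feature sequence, and late fusion of the face and pose streams for MTLE/ETLE classification — can be sketched as follows. This is a minimal NumPy toy with random weights and illustrative dimensions, not the authors' implementation; every function name, kernel size, and dimension here is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_features(frame, kernels):
    """Toy CNN stage: valid ReLU convolution + global average pooling."""
    K, kh, kw = kernels.shape
    H, W = frame.shape
    feats = np.empty(K)
    for k in range(K):
        resp = [max(0.0, float(np.sum(frame[i:i + kh, j:j + kw] * kernels[k])))
                for i in range(H - kh + 1) for j in range(W - kw + 1)]
        feats[k] = np.mean(resp)
    return feats

def lstm_last_hidden(seq, Wx, Wh, b):
    """Run a single-layer LSTM over per-frame features; return the final h."""
    hid = Wh.shape[1]
    h, c = np.zeros(hid), np.zeros(hid)
    for x in seq:
        z = Wx @ x + Wh @ h + b            # stacked input/forget/cell/output gates
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

def stream(frames, kernels, Wx, Wh, b):
    """One modality stream: CNN features per frame, then LSTM over time."""
    return lstm_last_hidden([conv_features(fr, kernels) for fr in frames], Wx, Wh, b)

# Illustrative dimensions: 8-frame clips of 12x12 "face" and "pose" crops.
T, H, W, K, HID = 8, 12, 12, 4, 6
face = rng.standard_normal((T, H, W))
pose = rng.standard_normal((T, H, W))

def init_lstm(in_dim, hid):
    return (0.1 * rng.standard_normal((4 * hid, in_dim)),
            0.1 * rng.standard_normal((4 * hid, hid)),
            np.zeros(4 * hid))

kernels_f = 0.1 * rng.standard_normal((K, 3, 3))
kernels_p = 0.1 * rng.standard_normal((K, 3, 3))

# Late fusion: concatenate the two stream embeddings, then a linear
# classifier with softmax over the two classes (MTLE vs ETLE).
fused = np.concatenate([stream(face, kernels_f, *init_lstm(K, HID)),
                        stream(pose, kernels_p, *init_lstm(K, HID))])
W_out = 0.1 * rng.standard_normal((2, 2 * HID))
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("P(MTLE), P(ETLE):", probs)
```

With random weights the output is of course meaningless; the sketch only shows how the two modality streams share a structure and meet at a single fused classifier, which is the design choice the abstract's "fusion model" implies.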
ID Code: 126907
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
Measurements or Duration: 4 pages
DOI: 10.1109/EMBC.2018.8513031
ISBN: 978-1-5386-3647-3
Pure ID: 33313787
Divisions: Past > Institutes > Institute for Future Environments; Past > QUT Faculties & Divisions > Science & Engineering Faculty
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons Licence (or other specified licence) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 28 Feb 2019 04:55
Last Modified: 06 Apr 2024 21:03