Semantic segmentation of hands in multimodal images: A new region-based CNN approach

Pemasiri, Akila, Ahmedt-Aristizabal, David, Nguyen, Kien, Sridharan, Sridha, Dionisio, Sasha, & Fookes, Clinton (2019) Semantic segmentation of hands in multimodal images: A new region-based CNN approach. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). Institute of Electrical and Electronics Engineers Inc., United States of America, pp. 819-823.


Description

Segmentation of body parts is a critical but challenging stage in the medical image processing pipeline due to anatomical complexity. Recent advances in deep learning have successfully dealt with this complexity in visible images. However, the efficacy of these segmentation techniques for other modalities of interest, such as X-ray images, is not known. We propose a unified semantic segmentation approach for body parts that can be applied concurrently to both X-ray and visible images. Unifying the two modalities in a single model not only reduces the number of parameters, but also improves accuracy by enabling end-to-end training and inference. Quantitative results are validated on two clinical applications: (1) a static analysis of hand segmentation in visible and X-ray images; and (2) a dynamic analysis which quantifies and classifies epileptic seizures from clinical manifestations of visible hand and finger motions. The proposed model is a potential stepping stone towards developing more robust automated systems that support the assessment of medical conditions based on multimodal information.
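The abstract describes a region-based CNN applied uniformly to X-ray and visible images. As a rough illustration only (the record does not include the authors' code or exact architecture), the sketch below assumes a Mask R-CNN-style detector from torchvision, a two-class setup (background plus hand), and X-ray frames replicated to three channels so both modalities pass through a single shared network; all of these specifics are assumptions, not the published method.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_hand_segmenter(num_classes=2):
    # num_classes = background + hand is an assumption for illustration.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box and mask heads so the detector predicts the hand class.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    mask_in = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_in, 256, num_classes)
    return model

model = build_hand_segmenter()
model.eval()

# One model serves both modalities: an X-ray frame is replicated to 3 channels,
# a visible (RGB) frame is used as-is, and both go through the same forward pass.
xray = torch.rand(1, 512, 512).repeat(3, 1, 1)   # placeholder X-ray image
visible = torch.rand(3, 512, 512)                # placeholder visible-light image
with torch.no_grad():
    outputs = model([xray, visible])             # per-image dicts: 'boxes', 'masks', 'scores'

Sharing one set of weights across modalities, as in this sketch, is what allows the parameter reduction and joint end-to-end training described in the abstract.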

Impact and interest:

4 citations in Scopus

Full-text downloads:

73 since deposited on 06 Jul 2020
33 in the past twelve months


ID Code: 201675
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
Series Name: Proceedings - International Symposium on Biomedical Imaging
ORCID iD:
Pemasiri, Akila: orcid.org/0000-0003-0443-9347
Ahmedt-Aristizabal, David: orcid.org/0000-0003-1598-4930
Nguyen, Kien: orcid.org/0000-0002-3466-9218
Sridharan, Sridha: orcid.org/0000-0003-4316-9001
Fookes, Clinton: orcid.org/0000-0002-8515-6324
Measurements or Duration: 5 pages
Keywords: Deep learning, Epilepsy, Quantitative motion analysis, X-ray images
DOI: 10.1109/ISBI.2019.8759215
ISBN: 978-1-5386-3642-8
Pure ID: 59288642
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Funding Information: The research presented in this paper was supported by an Australian Research Council (ARC) grant DP170100632.
Copyright Owner: 2019 IEEE
Copyright Statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Deposited On: 06 Jul 2020 23:32
Last Modified: 19 Apr 2024 17:19