Semantic segmentation of hands in multimodal images: A new region-based CNN approach
Accepted Version (PDF, 5MB). Available under License Creative Commons Attribution Non-commercial 4.0.
Description
Segmentation of body parts is a critical but challenging stage in the medical image processing pipeline due to anatomical complexity. Recent advances in deep learning have successfully dealt with this complexity in visible images. However, the efficacy of these segmentation techniques for other modalities of interest, such as X-ray images, is not known. We propose a unified semantic segmentation approach for body parts in both X-ray and visible images which can be applied concurrently to both modalities. Unifying the two modalities in a single model not only reduces the number of parameters, but also improves accuracy by enabling end-to-end training and inference. Quantitative results are validated on two clinical applications: (1) a static analysis of hand segmentation in visible and X-ray images; and (2) a dynamic analysis which quantifies and classifies epileptic seizures from clinical manifestations of visible hand and finger motions. The proposed model is a potential stepping stone towards developing more robust automated systems that support the assessment of medical conditions based on multimodal information.
ID Code: 201675
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
Series Name: Proceedings - International Symposium on Biomedical Imaging
Measurements or Duration: 5 pages
Keywords: Deep learning, Epilepsy, Quantitative motion analysis, X-ray images
DOI: 10.1109/ISBI.2019.8759215
ISBN: 978-1-5386-3642-8
Pure ID: 59288642
Divisions: Past > Institutes > Institute for Future Environments; Past > QUT Faculties & Divisions > Science & Engineering Faculty
Funding Information: The research presented in this paper was supported by an Australian Research Council (ARC) grant DP170100632.
Copyright Owner: 2019 IEEE
Copyright Statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Deposited On: 06 Jul 2020 23:32
Last Modified: 19 Apr 2024 17:19