Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images

Alzubaidi, Laith, Salhi, Asma, Fadhel, Mohammed A., Bai, Jinshuai, Hollman, Freek, Italia, Kristine, Pareyon, Roberto, Albahri, A. S., Ouyang, Chun, Santamaría, Jose, Cutbush, Kenneth, Gupta, Ashish, Abbosh, Amin, & Gu, Yuantong (2024) Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS ONE, 19(3), Article number: e0299545.

Open access copy at publisher website

Description

Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications, but previous methods for detecting shoulder abnormalities on X-ray images suffered from poor performance and a lack of transparency, owing to limited training data and inadequate feature representation. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the domain mismatch with ImageNet pre-training, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and then fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several machine learning (ML) classifiers. The proposed framework achieved an excellent accuracy of 99.2%, F1 score of 99.2%, and Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The framework has proven effective and robust, improving generalisation and increasing trust in the final results.
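The feature-fusion step described in the abstract can be sketched as follows. This is a minimal illustration only: the two toy "backbones" and the random arrays are hypothetical stand-ins for the paper's seven pre-trained DL models and its X-ray dataset, not the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for pre-trained DL backbones (the paper uses
# seven); each maps a batch of images to one feature vector per image.
def backbone_a(images):
    return images.mean(axis=(1, 2))   # toy feature: per-channel means

def backbone_b(images):
    return images.std(axis=(1, 2))    # toy feature: per-channel std-devs

# A batch of 8 toy "X-ray" images, 32x32 pixels, 3 channels.
images = rng.random((8, 32, 32, 3))

# Feature fusion: concatenate the feature vectors from every backbone
# into a single representation per image; this fused matrix is what
# would then be used to train the downstream ML classifiers.
fused = np.concatenate([backbone_a(images), backbone_b(images)], axis=1)

print(fused.shape)  # (8, 6): one row per image, features from both backbones
```

In the paper's setting, the fused rows (with their abnormality labels) would be fed to conventional ML classifiers, so the final decision draws on complementary features from all backbones rather than any single model.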

Impact and interest:

2 citations in Scopus

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

ID Code: 247545
Item Type: Contribution to Journal (Journal Article)
Refereed: Yes
ORCID iD:
Alzubaidi, Laith: orcid.org/0000-0002-7296-5413
Ouyang, Chun: orcid.org/0000-0001-7098-5480
Gu, Yuantong: orcid.org/0000-0002-2770-5014
Measurements or Duration: 39 pages
DOI: 10.1371/journal.pone.0299545
ISSN: 1932-6203
Pure ID: 165629901
Divisions: Current > Research Centres > Centre for Data Science
Current > QUT Faculties and Divisions > Faculty of Science
Current > Schools > School of Information Systems
Current > QUT Faculties and Divisions > Faculty of Engineering
Current > Schools > School of Mechanical, Medical & Process Engineering
Funding Information: The authors would like to acknowledge the support received through the following funding scheme of the Australian Government: the Australian Research Council (ARC) Industrial Transformation Training Centre (ITTC) for Joint Biomechanics under grant IC190100020. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Copyright Owner: 2024 The Authors
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 26 Mar 2024 04:21
Last Modified: 06 Aug 2024 18:07