Interpretability of machine learning models for medical image analysis

Konate, Salamata (2024) Interpretability of machine learning models for medical image analysis. PhD thesis, Queensland University of Technology.

PDF (86MB)
Salamata Konate.
Available under License Creative Commons Attribution Non-commercial No Derivatives 4.0.

Description

In medical machine learning (ML), interpreting model behaviour through visual explanation methods remains a significant challenge. This project explores the intersection of AI and medical image analysis, aiming to improve the accuracy of disease diagnosis through ML models. However, the "black box" nature of these models raises concerns about their reliability and potential bias. By investigating saliency methods, particularly in scenarios where the images themselves contain biases, this research aims to shed light on the behaviour and limitations of models used in medical ML.

Impact and interest:

Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citations counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

22 since deposited on 30 May 2024
22 in the past twelve months

Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 248812
Item Type: QUT Thesis (PhD)
Supervisor: Bradley, Andrew & Fookes, Clinton
Keywords: Artificial intelligence (AI), machine learning (ML), medical imaging, saliency maps, explainability of machine learning models
DOI: 10.5204/thesis.eprints.248812
Pure ID: 169528999
Divisions: Current > QUT Faculties and Divisions > Faculty of Engineering
Current > Schools > School of Electrical Engineering & Robotics
Institution: Queensland University of Technology
Deposited On: 30 May 2024 06:01
Last Modified: 30 May 2024 06:01