Date on Master's Thesis/Doctoral Dissertation

8-2024

Document Type

Doctoral Dissertation

Degree Name

Ph.D.

Department

Computer Engineering and Computer Science

Degree Program

Computer Science and Engineering, PhD

Committee Chair

Kantardzic, Mehmed

Committee Member

Chang, Dar-Jen

Committee Member

Zhang, Harry

Committee Member

Elmaghraby, Adel

Committee Member

Ng, Chin K.

Author's Keywords

deep learning; explainability; interpretability; cancer detection; MRI; XAI; radiology

Abstract

Prostate cancer is a major public health concern, affecting millions of men worldwide. While early detection and treatment of prostate cancer are critical for improving patient outcomes, the detection of prostate lesions is especially important for timely intervention and management of the disease. Prostate lesions are abnormal growths or masses within the prostate gland, which may or may not be cancerous; their timely detection and accurate diagnosis are crucial for effective treatment and management of the disease. In recent years, deep learning models have shown promise in accurately detecting and characterizing prostate lesions using advanced imaging techniques such as MRI. However, the reliability and interpretability of these models are essential for gaining the trust of healthcare professionals and patients, and for ensuring that the models are used effectively and ethically. By improving the accuracy and transparency of lesion detection models, it may be possible to enhance the effectiveness of treatment and improve patients' overall quality of life.

We therefore propose the Prostate Lesion Explanation using Interpretation Models (PLE-IM) framework for analyzing prostate MRI data with explanation methods that produce more transparent and trustworthy explanations. We introduce several performance measures for interpreting the feature representations learned from image data, such as evaluating the precision of explanation methods like Grad-CAM. These measures provide a quantitative and rigorous way to assess the accuracy and reliability of explanation methods for image data. In addition, we developed a system that integrates several explanation methods for both prostate lesion images and patient records, and uses them to examine how the outcomes of the image models and the patient-record models are related. Explanation methods provide essential insight into the key factors behind prostate lesion detection, and combining multiple models can further improve the transparency and reliability of the results. In line with this approach, we integrated Grad-CAM and Saliency Maps to enhance the accuracy and robustness of lesion localization in prostate MRI data; the two methods capture distinct aspects of the data and complement each other effectively. Similarly, pairing explanation methods with uncertainty-based lesion detection explanations may enhance radiologists' ability to accurately detect and characterize prostate lesions, because the two approaches provide complementary information that improves the reliability and transparency of the detection algorithm. To evaluate this, a pilot study was conducted with radiology experts to analyze explanation methods for radiology end use; its results suggest that explanation methods can provide clear and understandable explanations that support radiologists in prostate MRI interpretation and treatment planning.
The study aligns with broader findings in deep learning for prostate lesion identification and classification, which underscore the potential for clinical integration but also highlight challenges arising from the opacity of deep learning models. The results of the pilot study indicate that while explanation methods can enhance clinical task performance by up to 20%, their usefulness varies, and some methods are perceived less favorably. Radiologists particularly value methods that are robust against noise, precise, and consistent. These findings emphasize the need to refine explanation methods to meet clinical expectations, focusing on clarity, accuracy, and reliability to foster deeper trust and broader acceptance of deep learning in medical diagnostics. Finally, a new framework has been developed to enhance the interpretability of ensemble methods in medical diagnostics, specifically for prostate lesion classification from medical imaging. This framework clarifies the decision-making processes of ensemble classifiers and reduces bias and noise, leading to more accurate and reliable diagnostic outcomes. By improving interpretability, the framework builds trust in these diagnostic tools and encourages their ethical use in clinical environments. Emphasizing the integration of explainable AI techniques, the research highlights the need for these tools to be transparent and accountable, fostering trust in automated medical systems. This is crucial for the responsible adoption of technology in healthcare, ensuring that advanced diagnostic tools are not only accessible but also comprehensible to healthcare professionals. The framework supports better patient outcomes by providing clinicians with tools that are both advanced and straightforward, thereby enhancing the integrity of medical AI applications and promoting a patient-centered approach in the evolving landscape of medical diagnostics.
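
The abstract above describes combining Grad-CAM with saliency maps for lesion localization. The sketch below illustrates that general idea only; it is not the PLE-IM implementation. The classifier (a torchvision resnet18 stand-in), the hooked layer (layer4), and the element-wise fusion rule are all illustrative assumptions.

    # Illustrative sketch: Grad-CAM + input-gradient saliency for one image,
    # then a simple fusion of the two heatmaps. Assumes a generic PyTorch CNN.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None)   # stand-in classifier; a prostate-MRI model would go here
    model.eval()

    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["feat"] = output.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["feat"] = grad_out[0].detach()

    # Hook the last convolutional block (an assumption; the choice of layer matters for Grad-CAM).
    model.layer4.register_forward_hook(fwd_hook)
    model.layer4.register_full_backward_hook(bwd_hook)

    def explain(image, class_idx):
        """Return (grad_cam, saliency) heatmaps, resized to the input and scaled to [0, 1]."""
        image = image.clone().requires_grad_(True)
        logits = model(image)
        logits[0, class_idx].backward()

        # Grad-CAM: weight the feature maps by their spatially averaged gradients.
        weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)

        # Saliency: magnitude of the gradient of the target logit w.r.t. the input pixels.
        saliency = image.grad.abs().max(dim=1, keepdim=True).values

        norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-8)
        return norm(cam), norm(saliency)

    # Toy tensor standing in for a preprocessed MRI slice (1 sample, 3 channels, 224x224).
    slice_ = torch.rand(1, 3, 224, 224)
    cam, sal = explain(slice_, class_idx=1)

    # One possible fusion rule: the element-wise product keeps regions both methods highlight.
    combined = cam * sal

The product fusion shown at the end is just one way to combine the two maps; averaging or thresholded intersection are equally plausible choices under these assumptions.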
