Enhancing explainability in brain tumor detection: A novel DeepEBTDNet model with LIME on MRI images

Naeem Ullah, Muhammad Hassan, Javed Ali Khan, Muhammad Shahid Anwar, Khursheed Aurangzeb

Research output: Contribution to journal › Article › peer-review

Abstract

Early detection of brain tumors is vital for improving patient survival rates, yet manual analysis of large volumes of 3D MRI images is error-prone and time-consuming. This study introduces the Deep Explainable Brain Tumor Detection Network (DeepEBTDNet), a novel deep learning model for binary classification of brain MRIs as tumorous or normal. Employing dualistic sub-image histogram equalization (DSIHE) for enhanced image quality, DeepEBTDNet uses 12 convolutional layers with leaky ReLU (LReLU) activation for feature extraction, followed by a fully connected classification layer. Transparency and interpretability are emphasized by applying the Local Interpretable Model-Agnostic Explanations (LIME) method to explain model predictions. Results demonstrate DeepEBTDNet's efficacy in brain tumor detection, including under cross-dataset evaluation, with a validation accuracy of 98.96% and a testing accuracy of 94.0%. This study underscores the importance of explainable AI in healthcare, facilitating precise diagnoses and transparent decision-making for early brain tumor identification and improved patient outcomes.
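The pipeline described in the abstract (a 12-layer LReLU convolutional feature extractor, a fully connected binary classifier, and LIME explanations of individual predictions) can be sketched as below. This is a minimal illustration, not the authors' released code: the 224x224 input size, the filter schedule, the optimizer, and the Keras/lime usage are assumptions, and the DSIHE preprocessing step is omitted.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from lime import lime_image

    def build_deepebtdnet_like(input_shape=(224, 224, 3)):
        """DeepEBTDNet-style sketch: 12 conv layers + LReLU, then a dense head."""
        model = models.Sequential([layers.Input(shape=input_shape)])
        # Four blocks of three conv layers = 12 convolutional layers, as in the
        # abstract; the filter widths below are illustrative, not from the paper.
        for filters in [32, 64, 128, 256]:
            for _ in range(3):
                model.add(layers.Conv2D(filters, 3, padding="same"))
                model.add(layers.LeakyReLU(alpha=0.1))  # LReLU activation
            model.add(layers.MaxPooling2D())
        model.add(layers.GlobalAveragePooling2D())
        # Fully connected classification layer: tumorous vs. normal.
        model.add(layers.Dense(2, activation="softmax"))
        return model

    model = build_deepebtdnet_like()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Explain one prediction with LIME's image explainer.
    image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed MRI slice
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,
        classifier_fn=lambda batch: model.predict(batch, verbose=0),
        top_labels=2, hide_color=0, num_samples=100)
    # Superpixels that most support the predicted class.
    _, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5)

LIME perturbs superpixels of the input image and fits a local surrogate model, so the returned mask highlights the regions that most influenced the tumorous/normal decision; overlaying it on the MRI slice gives the kind of visual explanation the paper emphasizes.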

Original language: English
Article number: e23012
Pages (from-to): 1-15
Number of pages: 15
Journal: International Journal of Imaging Systems and Technology (IMA)
Volume: 34
Issue number: 1
Early online date: 22 Dec 2023
DOIs
Publication status: E-pub ahead of print - 22 Dec 2023

Keywords

  • brain tumor detection
  • deep learning
  • explainable AI
  • LIME
  • MRI
