TY - JOUR
T1 - Enhancing explainability in brain tumor detection: A novel DeepEBTDNet model with LIME on MRI images
AU - Ullah, Naeem
AU - Hassan, Muhammad
AU - Khan, Javed Ali
AU - Anwar, Muhammad Shahid
AU - Aurangzeb, Khursheed
N1 - © 2023 Wiley Periodicals LLC.
PY - 2023/12/22
Y1 - 2023/12/22
AB - Early detection of brain tumors is vital for improving patient survival rates, yet manual analysis of extensive 3D MRI scans is error-prone and time-consuming. This study introduces the Deep Explainable Brain Tumor Deep Network (DeepEBTDNet), a novel deep learning model for binary classification of brain MRIs as tumorous or normal. Employing dualistic sub-image histogram equalization (DSIHE) for enhanced image quality, DeepEBTDNet uses 12 convolutional layers with leaky ReLU (LReLU) activation for feature extraction, followed by a fully connected classification layer. Transparency and interpretability are emphasized by applying the Local Interpretable Model-Agnostic Explanations (LIME) method to explain model predictions. Results demonstrate DeepEBTDNet's efficacy in brain tumor detection, including under cross-dataset evaluation, with a validation accuracy of 98.96% and a testing accuracy of 94.0%. This study underscores the importance of explainable AI in healthcare, facilitating precise diagnoses and transparent decision-making for early brain tumor identification and improved patient outcomes.
KW - brain tumor detection
KW - deep learning
KW - explainable AI
KW - LIME
KW - MRI
UR - https://www.scopus.com/pages/publications/85180203912
U2 - 10.1002/ima.23012
DO - 10.1002/ima.23012
M3 - Article
AN - SCOPUS:85180203912
SN - 0899-9457
VL - 34
SP - 1
EP - 15
JO - International Journal of Imaging Systems and Technology
JF - International Journal of Imaging Systems and Technology
IS - 1
M1 - e23012
ER -
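
A minimal sketch of the pipeline the abstract describes: 12 convolutional layers with LeakyReLU activations feeding a single fully connected layer for binary (tumor / normal) classification, followed by a LIME explanation of one prediction. This is an illustration written against plain PyTorch and the lime package, not the authors' code: the channel widths, kernel sizes, pooling placement, input resolution, and grayscale-to-RGB handling are all assumptions, since the record does not specify them.

import numpy as np
import torch
import torch.nn as nn
from lime import lime_image

class DeepEBTDNetSketch(nn.Module):
    """Sketch of the described architecture: 12 conv layers with LReLU,
    then one fully connected classification layer. All hyperparameters
    below (widths, kernels, pooling) are assumptions, not the paper's."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        blocks, ch = [], in_channels
        for i in range(12):                        # 12 convolutional layers, per the abstract
            out_ch = min(32 * 2 ** (i // 3), 256)  # assumed width schedule
            blocks += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.LeakyReLU(0.01, inplace=True)]
            if i % 3 == 2:                         # assumed: downsample every third layer
                blocks.append(nn.MaxPool2d(2))
            ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(ch, num_classes)  # the single FC classification layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = nn.functional.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.classifier(x)

model = DeepEBTDNetSketch().eval()

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    # LIME hands the classifier RGB images of shape (N, H, W, 3); collapse
    # them back to one grayscale channel for this 1-channel model (assumption).
    x = torch.from_numpy(batch.mean(axis=3, keepdims=True)).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).numpy()

slice_2d = np.random.rand(224, 224)  # stand-in for a DSIHE-preprocessed MRI slice
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(slice_2d, classifier_fn,
                                         top_labels=2, num_samples=200)
# Highlight the superpixels that most support the predicted class.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)

Because LIME perturbs superpixels and fits a local surrogate model, classifier_fn must accept a batch of images and return class probabilities; everything else in the explainer call relies on lime_image defaults.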