Timely diagnosis of brain tumors using MRI and its potential impact on patient survival are critical issues addressed in this study. Traditional deep learning (DL) models often lack transparency, and their "black box" nature leads to skepticism among medical experts. This study addresses this gap by presenting an innovative approach to brain tumor detection. It utilizes a customized Convolutional Neural Network (CNN) model empowered by three advanced explainable artificial intelligence (XAI) techniques: Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM). The study utilized the BR35H dataset, which includes 3060 brain MRI images encompassing both tumorous and non-tumorous cases. The proposed model achieved a training accuracy of 100 % and a validation accuracy of 98.67 %. Precision, recall, and F1 score each reached 98.50 %, confirming the model's accuracy in tumor detection. Detailed result analysis, including a confusion matrix, comparisons with existing models, and generalizability tests on other datasets, establishes the superiority of the proposed approach and sets a new benchmark for accuracy. By integrating a customized CNN model with XAI techniques, this research enhances trust in AI-driven medical diagnostics and offers a promising pathway for early tumor detection and potentially life-saving interventions.
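For reference, the short sketch below shows how precision, recall, and F1 of the kind reported above are derived from a binary confusion matrix. It is a minimal illustration only: the label and prediction counts are hypothetical placeholders, not the paper's actual validation results.

    # Hedged illustration: deriving accuracy, precision, recall, and F1 from a
    # binary confusion matrix. The counts are hypothetical, chosen only to show
    # the arithmetic; they are not the paper's reported results.
    from sklearn.metrics import confusion_matrix

    y_true = [1] * 300 + [0] * 300                      # 1 = tumorous, 0 = non-tumorous (placeholder split)
    y_pred = [1] * 296 + [0] * 4 + [0] * 295 + [1] * 5  # a few errors in each direction

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
          f"recall={recall:.4f} f1={f1:.4f}")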
First, this study introduces a customized CNN architecture specifically designed for brain tumor detection.
Second, integrating SHAP, LIME, and Grad-CAM as XAI techniques makes the model's decision-making process more transparent (an illustrative Grad-CAM sketch follows this list).
Moreover, the study validates the model on additional datasets, demonstrating its robust performance and generalizability.
Finally, this work opens the door to more accurate neuroimaging diagnosis by bridging AI research and real-world clinical practice.
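To make the Grad-CAM component concrete, the sketch below computes a class activation heatmap for a single MRI slice using a small TensorFlow/Keras model. It is a generic illustration under stated assumptions, not the paper's customized CNN: the stand-in architecture, the 224x224 input size, and the layer name "last_conv" are placeholders introduced purely for this example.

    # Minimal, illustrative Grad-CAM sketch in TensorFlow/Keras.
    # The architecture, input size, and layer names below are placeholders,
    # not the customized CNN described in the paper.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # A small binary-classification CNN standing in for the customized model.
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", name="last_conv"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

    def grad_cam(model, image, conv_layer_name="last_conv"):
        """Return a Grad-CAM heatmap for a single (H, W, 3) image."""
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output],
        )
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            score = preds[:, 0]                       # tumor-class score
        grads = tape.gradient(score, conv_out)        # d(score)/d(feature maps)
        weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pooled gradients
        cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
        cam = tf.nn.relu(cam)                         # keep positively contributing regions
        cam /= (tf.reduce_max(cam) + 1e-8)            # normalize to [0, 1]
        return cam.numpy()

    # Random array standing in for a preprocessed MRI slice.
    heatmap = grad_cam(model, np.random.rand(224, 224, 3).astype("float32"))
    print(heatmap.shape)  # coarse map, upsampled onto the MRI slice for display

The resulting coarse heatmap is typically resized to the input resolution and overlaid on the MRI slice so that clinicians can see which regions drove the tumor prediction; SHAP and LIME provide complementary feature-attribution views of the same decision.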