Abstract :
Heart disease remains one of the leading causes of morbidity and mortality worldwide, making early and accurate diagnosis essential for improving patient outcomes. Machine Learning (ML) has emerged as a powerful tool for cardiovascular disease classification, leveraging complex clinical datasets to detect patterns that conventional statistical methods may miss. Despite their strong predictive performance, however, many ML models operate as black boxes, offering little transparency into how they reach a decision. This lack of interpretability is a significant obstacle in clinical settings, where trust, accountability, and explanation are essential for medical professionals. To address this issue, we propose a heart disease classification methodology grounded in Explainable Artificial Intelligence (XAI) that incorporates interpretable machine learning models to improve diagnostic transparency and reliability. Our framework evaluates several classifiers, including Support Vector Machine (SVM), Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), Multi-Layer Perceptron (MLP), and LightGBM, using key performance metrics: accuracy, precision, recall, F1-score, and AUC-ROC. SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) are integrated to enhance model interpretability. Experimental results show that XGBoost outperforms the other models, achieving the highest classification accuracy of 92% and an AUC-ROC of 0.93 while remaining interpretable. This study underscores the importance of incorporating XAI techniques in medical AI applications and advocates transparent, interpretable, and clinically reliable machine learning methods to support clinical decision-making and improve patient outcomes.
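As a rough illustration of the pipeline described above (train and evaluate several classifiers on the listed metrics, then explain the best-performing one), the following Python sketch fits an XGBoost classifier, reports accuracy, precision, recall, F1-score, and AUC-ROC, and computes SHAP values for interpretability. This is a minimal sketch, not the authors' code: the synthetic data, feature count, and hyperparameters are placeholders, and it assumes scikit-learn, xgboost, and shap are installed.

```python
# Minimal sketch of the evaluate-then-explain workflow; data and settings are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from xgboost import XGBClassifier
import shap

# Placeholder data standing in for a tabular heart-disease dataset
# (e.g. ~300 patients with 13 clinical attributes, binary label).
rng = np.random.default_rng(0)
X = rng.normal(size=(303, 13))
y = rng.integers(0, 2, size=303)  # 1 = heart disease present

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Illustrative hyperparameters, not the authors' configuration.
model = XGBClassifier(n_estimators=200, max_depth=4,
                      learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("AUC-ROC  :", roc_auc_score(y_test, y_prob))

# Global interpretability: SHAP values rank features by their
# contribution to the model's predictions on the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)
```

A LIME tabular explainer could be applied analogously to produce local, per-patient explanations alongside the global SHAP summary.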
Keywords :
Heart disease, LIME, Machine learning, SHAP, XAI