Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability

Deepak Mane, Anand Magar, Om Khode, Sarvesh Koli, Komal Bhat, Prajwal Korade

Abstract

Explainable AI (XAI) is critical for establishing trust in machine learning models and enabling their responsible development. By offering transparency into how these models make decisions, XAI enables researchers and users to uncover potential biases, acknowledge limitations, and ultimately improve the fairness and reliability of AI systems. Because XAI methods are designed to provide insight into how complex models arrive at their decisions, assessing them is essential in the search for transparent and interpretable AI. In this paper, we thoroughly analyze two prominent XAI methods for improving the interpretability of machine learning models: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). This study aims to understand the decisions made by a machine learning model and how the model arrives at them. We discuss the approach and framework of both LIME and SHAP and assess their behavior in explaining the model's predictions.
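To make the comparison concrete, the following is a minimal sketch of how the two methods are typically applied. It assumes the open-source lime and shap Python packages and a scikit-learn random forest standing in for the black-box model; this setup is illustrative and is not the paper's experimental configuration.

# Minimal sketch (assumed setup, not the paper's experiment): explain one
# prediction of a random-forest classifier with both LIME and SHAP.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME: perturb the instance, fit a local linear surrogate model, and
# report the top weighted features for this single prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())

# SHAP: compute per-feature Shapley-value contributions for the same
# instance; TreeExplainer does this efficiently for tree ensembles.
# (The shape of the returned array varies across shap versions.)
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])
print(shap_values)

The sketch highlights the methodological contrast analyzed in the paper: LIME explains a prediction by fitting a simple surrogate model in the neighborhood of one instance, while SHAP attributes the prediction to features using Shapley values from cooperative game theory.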
