Explainable Deep Learning Based Adaptive Malware Detection Framework to Identify and Prevent Fraudulent Activities in Real-World Applications

Brajesh Kumar Sharma, Chandrashekhar Goswami, Prasun Chakrabarti

Abstract

The rapid evolution of malware and fraudulent activities in digital environments has created unprecedented challenges for traditional cybersecurity approaches. While machine learning and deep learning models have demonstrated superior detection capabilities, their black-box nature significantly limits their adoption in security-critical environments, where understanding the rationale behind decisions remains paramount. This research presents a novel explainable deep learning framework that combines adaptive malware detection with real-world fraud prevention capabilities. The proposed framework integrates Explainable Artificial Intelligence (XAI) techniques with advanced deep learning architectures to provide transparent, interpretable, and trustworthy malware detection mechanisms. In a comprehensive evaluation across multiple datasets, the framework achieved 97.98% accuracy in malware detection and 95.36% accuracy in fraud identification, while retaining interpretability through SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) explainability modules. The framework demonstrates significant improvements over traditional signature-based methods and existing machine learning approaches, particularly in detecting zero-day threats and adaptive malware variants. Key innovations include a multi-modal feature extraction pipeline, real-time adaptability mechanisms, and comprehensive explainability components that enable security analysts to understand and validate detection decisions. The research addresses critical gaps in the current literature by providing both high-performance detection and meaningful explanations, making the framework suitable for deployment in enterprise environments where compliance and transparency requirements are essential.
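
As a purely illustrative aid (not the authors' implementation), the sketch below shows how a SHAP-based explainability module of the kind described in the abstract can attribute a classifier's per-sample decision to individual input features. The classifier, the synthetic data, and the feature names (api_call_count, entropy, and so on) are hypothetical placeholders standing in for the paper's multi-modal malware features.

```python
# Minimal sketch: per-sample SHAP attributions for a toy "malware" classifier.
# All data and feature names are synthetic placeholders, not the paper's dataset.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted malware features.
feature_names = ["api_call_count", "entropy", "import_count", "section_count"]
X = rng.random((200, 4))
y = (X[:, 1] + 0.5 * X[:, 0] > 0.9).astype(int)  # toy "malicious" label

# A small neural classifier standing in for the paper's deep architecture.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X, y)

# Model-agnostic SHAP explanation: how much each feature pushed one
# prediction toward (positive value) or away from (negative) "malicious".
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)
sv = explainer.shap_values(X[:1])

# Older SHAP versions return a list per class; newer ones a single array
# with a trailing class axis. Select attributions for the "malicious" class.
sv_malicious = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]
for name, value in zip(feature_names, sv_malicious):
    print(f"{name}: {value:+.3f}")
```

LIME plays the analogous local role: lime.lime_tabular.LimeTabularExplainer, given the same predict function, would produce a comparable per-feature breakdown for a single sample, which is the kind of output a security analyst can use to validate a detection decision.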
