IoT Network Intrusion Detection and Classification using Explainable (XAI) Machine Learning Algorithms
Abstract
Artificial intelligence (AI) has grown significantly, but because it behaves as a "black box," it is difficult to build sufficient trust in it. As a result, it is rarely used as a stand-alone solution in high-risk Internet of Things applications such as financial, medical, and critical industrial infrastructures. Explainable AI (XAI) has been developed to address this issue, but creating XAI that is both sufficiently fast and accurate remains difficult.
XAI can be used to provide explainability for machine learning techniques. As explainability has become a fundamental concern with the newest ML techniques, such as ensembles and Deep Neural Networks, XAI has gained significant traction recently: demands for accountability, transparency, and trust mean that model explainability is an increasing priority. More precisely, [1] have carried out preliminary research into creating a network analysis of malicious behaviours for explainable artificial intelligence frameworks. As machine learning models become more accurate, their complexity rises and their interpretability falls as a result.
In this research, Deep Neural Networks have been created to detect network intrusions, together with an explanation module that builds trust in the resulting model through XAI algorithms. Using an intrusion detection dataset, we successfully increased model transparency by applying existing XAI algorithms that provide explanations for individual predictions, such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-Agnostic Explanations), the Contrastive Explanations Method (CEM), ProtoDash, and Boolean Decision Rules via Column Generation (BRCG).
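As an illustration of how such per-prediction explanations can be generated, the sketch below applies SHAP and LIME to a trained intrusion-detection classifier. It is a minimal example under stated assumptions, not the paper's implementation: the Keras model `model`, the feature arrays `X_train`/`X_test`, the `feature_names` list, and the benign/attack class labels are all hypothetical placeholders, and the DNN is assumed to return per-class probabilities from its predict function.

# Minimal sketch: local explanations for a DNN-based intrusion detector.
# Assumptions (not from the paper): a trained Keras classifier `model` whose
# predict() returns per-class probabilities, preprocessed feature arrays
# X_train / X_test, and a feature_names list describing the network-flow features.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

# --- SHAP: summarise the background data, then explain a few test flows ---
background = shap.kmeans(X_train, 50)             # k-means summary keeps KernelExplainer tractable
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X_test[:10])  # per-feature attributions for 10 predictions
shap.summary_plot(shap_values, X_test[:10], feature_names=feature_names)

# --- LIME: explain a single prediction with a local surrogate model ---
lime_explainer = LimeTabularExplainer(
    np.asarray(X_train),
    feature_names=feature_names,
    class_names=["benign", "attack"],
    mode="classification",
)
exp = lime_explainer.explain_instance(np.asarray(X_test[0]), model.predict, num_features=10)
print(exp.as_list())                              # top features driving this flow's predicted label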
Article Details
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.