Explainable AI for Adversarial Machine Learning: Enhancing Transparency and Trust in Cyber Security


Araddhana Arvind Deshmukh, Sheela Hundekari, Yashwant Dongre, Kirti Wanjale, Vikas Balasaheb Maral, Deepali Bhaturkar

Abstract

Explainable artificial intelligence (XAI) is essential for improving the interpretability, transparency, and reliability of machine learning models, especially in challenging, high-stakes fields such as cybersecurity. This abstract covers approaches, frameworks, and evaluation criteria for implementing and comparing XAI techniques, offering a thorough overview of the key components of XAI in the context of adversarial machine learning. The discussion spans model-agnosticism, global and local explanation, resistance to adversarial attacks, interpretability, computational efficiency, and scalability. Notably, the proposed SHIME approach performs strongly across several of these dimensions, making it a promising solution. The abstract concludes by emphasizing the need to weigh XAI solutions carefully against the requirements of each application, opening the door for future work on the evolving challenges at the intersection of cybersecurity and artificial intelligence.
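As a point of reference for the model-agnostic, local-explanation criteria discussed above, the sketch below contrasts SHAP and LIME, the two explainer families a hybrid such as SHIME would plausibly combine, on a toy intrusion-detection-style classifier. This is an illustrative assumption only: the abstract does not specify SHIME's construction, and the dataset, model, and parameters here are placeholders.

    # Illustrative sketch only: SHIME's construction is not given in the
    # abstract; this simply contrasts two model-agnostic local explainers
    # (SHAP and LIME) on a stand-in classifier.
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for labeled network-flow features (0 = benign, 1 = malicious).
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # SHAP: additive, game-theoretic attributions for a single flow record.
    shap_values = shap.TreeExplainer(model).shap_values(X[:1])

    # LIME: fits a local surrogate model around the same record.
    lime_exp = LimeTabularExplainer(X, mode="classification").explain_instance(
        X[0], model.predict_proba, num_features=5
    )
    print(lime_exp.as_list())  # top features driving this local prediction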

Article Details

Section
Articles
Author Biography

Araddhana Arvind Deshmukh, Sheela Hundekari, Yashwant Dongre, Kirti Wanjale, Vikas Balasaheb Maral, Deepali Bhaturkar

1. Dr. Araddhana Arvind Deshmukh, Professor, Department of Computer Science & Information Technology (Cyber Security), Symbiosis Skill and Professional University, Kiwale, Pune, India. Email: aadeshmukhskn@gmail.com

2. Sheela Hundekari, Associate Professor, MCA Department, MITCOM, MIT ADT University, Loni Kalbhor, Pune, Maharashtra, India. Email: sheela.hundekari@mituniversity.edu.in

3. Yashwant Dongre, Assistant Professor (Computer), VIIT College, Kapil Nagar, Kondhwa Budruk, Pune, Maharashtra, India. Email: yashwant.dongre@viit.ac.in

4. Dr. Kirti Wanjale, Associate Professor, Department of Computer Engineering, Vishwakarma Institute of Information Technology, Pune, Maharashtra, India. Email: kirti.wanjale@viit.ac.in

5. Vikas Balasaheb Maral, Assistant Professor, Vishwakarma Institute of Information Technology, Pune, Maharashtra, India. Email: vikas.maral@viit.ac.in

6. Deepali Bhaturkar, Assistant Professor (IT), International Institute of Information Technology (I2IT), Pune, Maharashtra, India. Email: deepali.bhaturkar11@gmail.com

References

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., et al. (2020) 'Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI', Information Fusion, 58, pp. 82–115.

Luong, N. C., Hoang, D. T., Gong, S., et al. (2019) 'Applications of deep reinforcement learning in communications and networking: A survey', IEEE Communications Surveys & Tutorials, 21(4), pp. 3133–3174.

Perarasi, T., Vidhya, S., Leeban Moses, M., and Ramya, P. (2020) 'Malicious vehicles identifying and trust management algorithm for enhance the security in 5G-VANET', in Proceedings of the 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, July 2020.

Guo, W. (2020) 'Explainable artificial intelligence for 6G: Improving trust between human and machine', IEEE Communications Magazine, 58(6), pp. 39–45.

Rudin, C. (2019) 'Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead', Nature Machine Intelligence, 1(5), pp. 206–215.

Castelvecchi, D. (2016) 'Can we open the black box of AI?', Nature, 538(7623), p. 20.

Carvalho, D. V., Pereira, E. M., and Cardoso, J. S. (2019) 'Machine learning interpretability: A survey on methods and metrics', Electronics, 8(8), p. 832.

Sultana, N., Chilamkurti, N., Peng, W., and Alhadad, R. (2019) 'Survey on SDN based network intrusion detection system using machine learning approaches', Peer-to-Peer Networking and Applications, 12(2), pp. 493–501.

Kumar, G., Kumar, K., and Sachdeva, M. (2010) 'The use of artificial intelligence based techniques for intrusion detection: A review', Artificial Intelligence Review, 34(4), pp. 369–387.

Wang, M., et al. (2020) 'An explainable machine learning framework for intrusion detection systems', IEEE Access, 8, pp. 73127–73141.

Wang, S., et al. (2016) 'TrafficAV: An effective and explainable detection of mobile malware behavior using network traffic', in 2016 IEEE/ACM 24th International Symposium on Quality of Service (IWQoS). IEEE, pp. 1–6.

Wang, Z., et al. (2020) 'Smoothed geometry for robust attribution', in Advances in Neural Information Processing Systems, 33, pp. 13623–13634.

Xu, F., et al. (2019) 'Explainable AI: A brief survey on history, research areas, approaches and challenges', in CCF International Conference on Natural Language Processing and Chinese Computing. Springer, pp. 563–574.

Zeng, X. and Martinez, T. (2001) 'Distribution-balanced stratified cross-validation for accuracy estimation', Journal of Experimental & Theoretical Artificial Intelligence, 12. https://doi.org/10.1080/095281300146272.

Zeng, Z., et al. (2015) 'A novel feature selection method considering feature interaction', Pattern Recognition, 48(8), pp. 2656–2666.

Zhao, X., et al. (2021) 'Exploiting explanations for model inversion attacks', in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 682–692.

Zolanvari, M., et al. (2019) 'Machine learning-based network vulnerability analysis of industrial Internet of Things', IEEE Internet of Things Journal, 6(4), pp. 6822–6834.

Zolanvari, M., et al. (2021) 'TRUST XAI: Model-agnostic explanations for AI with a case study on IIoT security', IEEE Internet of Things Journal.