A Game-Theoretic Approach to Adversarial Machine Learning: Modeling the Attacker-Defender Dynamic

Kristine Soberano

Abstract

Machine learning (ML) models are vulnerable to adversarial attacks, which calls for defense strategies that adapt to evolving threats. This research applies game theory to the attacker-defender dynamic and develops equilibrium-driven methods to enhance adversarial resistance. The interaction is modeled as a non-cooperative game, and strategic decision-making is analyzed through both Nash and Stackelberg equilibria. Utility functions capture the trade-off between attack success and defense cost. Defenses built on the Stackelberg equilibrium outperform Nash-based defenses, reducing misclassification rates by 80-90% compared with the 60-75% reduction achieved under Nash defenses. Stackelberg strategies demand greater computational resources, however, so security must be balanced against efficiency. The results show that proactive defense strategies allow defenders to anticipate attacker moves rather than react only after an attack occurs. By analyzing adaptive defenses against adversarial attacks, the research supports AI security in cybersecurity, financial fraud detection, and autonomous systems. Realizing the security benefits of game-theoretic approaches in practice also requires accounting for hardware limitations. The proposed equilibrium-based defense framework addresses limitations of existing adversarial ML models, including static assumptions and incomplete equilibrium analysis. Future research should validate the model on real adversary datasets, extend it to multi-agent systems, and incorporate irrational attack patterns to further improve adaptability and robustness.
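
As a concrete illustration of the commitment advantage the abstract describes, the minimal Python sketch below enumerates pure-strategy Nash equilibria and computes the defender's optimal Stackelberg commitment for a toy attacker-defender game. The payoff matrices and strategy indices are illustrative assumptions, not the paper's data or implementation; in this example the unique Nash equilibrium yields the defender a payoff of 1, while committing first (Stackelberg) raises it to 2.

    import numpy as np

    # Illustrative payoffs (assumed for this sketch, not taken from the
    # paper). Rows are defender strategies, columns attacker strategies;
    # u[i, j] is that player's payoff when the defender plays i and the
    # attacker plays j.
    defender_u = np.array([[1, 3],
                           [0, 2]])
    attacker_u = np.array([[1, 0],
                           [0, 1]])

    def pure_nash(defender_u, attacker_u):
        """Enumerate pure-strategy Nash equilibria: profiles where
        neither player gains by deviating unilaterally."""
        return [
            (i, j)
            for i in range(defender_u.shape[0])
            for j in range(defender_u.shape[1])
            if defender_u[i, j] >= defender_u[:, j].max()
            and attacker_u[i, j] >= attacker_u[i, :].max()
        ]

    def stackelberg_defense(defender_u, attacker_u):
        """Defender commits to a strategy first; the attacker observes
        it and best-responds (ties broken by np.argmax). Return the
        commitment that maximizes the defender's resulting payoff."""
        def value(i):
            return defender_u[i, int(np.argmax(attacker_u[i]))]
        best = max(range(defender_u.shape[0]), key=value)
        return best, value(best)

    print("Pure Nash equilibria:", pure_nash(defender_u, attacker_u))
    # -> [(0, 0)], giving the defender a payoff of 1
    print("Stackelberg commitment:", stackelberg_defense(defender_u, attacker_u))
    # -> (1, 2): committing to strategy 1 raises the defender's payoff to 2

The gap between the Nash payoff and the Stackelberg payoff in this toy game mirrors the abstract's finding that a defender who commits first, anticipating the attacker's best response, can do strictly better than one who plays a simultaneous-move equilibrium.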
