Static Defense against Adversarial Attacks on Fetal Anatomical Plane Classification

Harismithaa L R, G Sudha Sadasivam, Achari Magesh, Lavanya S R

Abstract

Deep learning is revolutionizing healthcare and medical imaging, but its growing adoption raises concerns about adversarial attacks, especially in medical ultrasound imaging. Developing deep learning models that are robust against adversarial attacks while maintaining an optimal robustness-accuracy trade-off is crucial. Common defense methods include training with adversarial examples, input denoising, and comparing original and altered inputs. Among the multiple categories of attacks, this paper proposes static defense approaches against gradient-based, optimization-based, and decision-boundary-based attacks such as PGD, Carlini-Wagner, and DeepFool, taking Madry's static defense as the baseline. Specifically, it proposes the DAMRoC defense framework together with a hybrid approach, both of which were implemented and evaluated against various gradient-based attacks as well as the Carlini-Wagner and DeepFool attacks, alongside the Madry defense. The proposed models achieved good accuracy and robustness against most attacks within a given category but remained weak against attacks from other categories, which motivates the exploration of dynamic defense frameworks covering a broader range of adversarial attacks.
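For context, adversarial-training defenses in the Madry style repeatedly generate worst-case inputs with projected gradient descent (PGD) and train on them. The following is a minimal sketch of an L-infinity PGD attack on a toy logistic model; the weights, epsilon, and step size here are illustrative assumptions, not values from the paper.

```python
import math

# Toy logistic model: p(y=1 | x) = sigmoid(w . x + b).
# Weights are illustrative assumptions, not from the paper.
W = [2.0, -1.0]
B = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def loss_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the INPUT x:
    # dL/dx = (p - y) * w  for a logistic model.
    p = predict(x)
    return [(p - y) * wi for wi in W]

def pgd_attack(x0, y, eps=0.3, alpha=0.1, steps=10):
    """L-infinity PGD: take signed gradient-ascent steps on the loss,
    then project each coordinate back into the eps-ball around x0."""
    x = list(x0)
    for _ in range(steps):
        g = loss_grad(x, y)
        # Ascent step in the direction of the gradient sign.
        x = [xi + alpha * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]
        # Projection: clip each coordinate to [x0_i - eps, x0_i + eps].
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

x_clean = [1.0, 0.0]              # confidently classified as y=1
x_adv = pgd_attack(x_clean, y=1)  # perturbed to reduce that confidence
print(predict(x_clean) > predict(x_adv))
```

Adversarial training then simply replaces clean training batches with `pgd_attack` outputs; gradient-free attacks such as Carlini-Wagner or DeepFool instead solve an optimization or boundary-search problem, which is why a defense tuned to one category can fail on another.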
