Frequency Domain Backdoor Attacks for Visual Object Tracking
Abstract
Visual object tracking (VOT) is a key task in computer vision. It serves as an essential component of many higher-level problems, such as motion analysis, event detection, and activity understanding, and it finds extensive applications in video-based human-computer interaction, video surveillance, and autonomous driving. Driven by the rapid development of deep neural networks (DNNs), VOT has achieved unprecedented progress. However, the lack of interpretability in DNNs introduces security risks, notably backdoor attacks. In a neural network backdoor attack, an attacker injects a hidden backdoor into the network so that the compromised model behaves normally on regular inputs but produces attacker-specified outputs when the attacker's trigger condition is met. Existing triggers for VOT backdoor attacks are poorly concealed. We exploit the sensitivity of DNNs to small perturbations to generate pixel-level indistinguishable perturbations in the frequency domain, thereby proposing an invisible backdoor attack that is both effective and concealed. In addition, we employ a differential evolution (DE) algorithm to optimize trigger generation, reducing the capabilities required of the attacker. We validate the effectiveness of the attack across multiple datasets and models.
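The abstract does not spell out how the frequency-domain trigger is constructed. As a rough illustration only, the sketch below shows one common way such a trigger could be embedded: a fixed pseudo-random phase pattern is added to a mid-frequency band of each image channel, then rescaled so the spatial-domain change stays within a small pixel budget. The function name `embed_frequency_trigger`, the chosen band, and the amplitude are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical sketch: embedding an (approximately) imperceptible trigger in the
# frequency domain of an image. Names, band, and magnitudes are illustrative
# assumptions, not the procedure used in the paper.
import numpy as np

def embed_frequency_trigger(image, amplitude=2.0, band=(0.25, 0.5), seed=0):
    """Add a small perturbation to a mid-frequency band of each channel.

    image:     float32 array in [0, 255], shape (H, W, C).
    amplitude: peak spatial-domain change in pixel values (kept small so the
               trigger stays visually indistinguishable).
    band:      normalized frequency radius range that carries the trigger.
    """
    h, w, c = image.shape
    # Radial frequency grid (0 at DC, 0.5 at Nyquist) used to mask a band.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    mask = (radius >= band[0]) & (radius < band[1])

    # Fixed pseudo-random phase pattern acts as the secret trigger key.
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, size=(h, w))

    poisoned = image.astype(np.float32).copy()
    for ch in range(c):
        spectrum = np.fft.fft2(poisoned[:, :, ch])
        spectrum += mask * np.exp(1j * phase)       # inject trigger into the band
        perturbed = np.real(np.fft.ifft2(spectrum))
        # Rescale so the spatial perturbation never exceeds `amplitude`.
        delta = perturbed - poisoned[:, :, ch]
        peak = np.max(np.abs(delta)) + 1e-8
        poisoned[:, :, ch] += delta * (amplitude / peak)
    return np.clip(poisoned, 0, 255)

# Example: poison a synthetic "frame" the way a tracking template might be poisoned.
frame = np.random.uniform(0, 255, size=(128, 128, 3)).astype(np.float32)
poisoned_frame = embed_frequency_trigger(frame)
print(np.max(np.abs(poisoned_frame - frame)))  # roughly 2.0: imperceptible to the eye
```

In such a setup, the trigger's free parameters (frequency band, amplitude, phase pattern) are natural candidates for the search variables that a differential evolution optimizer would tune, trading off attack success against visibility; whether the paper parameterizes its trigger this way is not stated in the abstract.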
Article Details
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.