Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond

Mu, Ronghui and Marcolino, Leandro and Ni, Qiang and Ruan, Wenjie (2024) Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond. Neural Networks, 171, pp. 127–143. ISSN 0893-6080

Text (NN_J_submission): NN_J_submission.pdf - Accepted Version
Available under License Creative Commons Attribution.

Download (2MB)

Abstract

Recent years have witnessed increasing interest in adversarial attacks on images, whereas adversarial attacks on videos have seldom been explored. In this paper, we propose a sparse adversarial attack strategy on videos (DeepSAVA). Our model aims to add a small, human-imperceptible perturbation to the key frame of the input video to fool the classifier. To carry out an effective attack that mirrors real-world scenarios, our algorithm integrates spatial transformation perturbations into the frame. Instead of using an Lp norm to gauge the disparity between the perturbed frame and the original frame, we employ the structural similarity index (SSIM), which has been established as a more suitable metric for quantifying image alterations resulting from spatial perturbations. We employ a unified optimisation framework to combine spatial transformation with additive perturbation, thereby attaining a more potent attack. We design an effective and novel optimisation scheme that alternates between Bayesian Optimisation (BO), to identify the most critical frame in a video, and stochastic gradient descent (SGD) based optimisation, to produce both additive and spatially transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos, maintaining human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Furthermore, building upon the strong perturbations produced by DeepSAVA, we design a novel adversarial training framework to improve the robustness of video classification models. Our extensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA in terms of attacking performance and efficiency. Compared with the baseline techniques, DeepSAVA exhibits the highest performance in generating adversarial videos for three distinct video classifiers. Remarkably, it achieves a fooling rate ranging from 99.5% to 100% on the I3D model while perturbing just a single frame. Additionally, DeepSAVA demonstrates favourable transferability across various time-series models. The proposed adversarial training strategy is also empirically shown to train more robust video classifiers than state-of-the-art adversarial training with a projected gradient descent (PGD) adversary.
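To make the alternating optimisation concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: Bayesian Optimisation (here via scikit-optimize's gp_minimize) searches for the most critical frame index, while SGD crafts an additive perturbation for that frame under an SSIM similarity term. Everything below is an illustrative assumption rather than the authors' implementation: the (B, T, C, H, W) video layout, the function names and hyperparameters, and the simplified whole-frame SSIM (no Gaussian windowing); the paper's spatial-transformation component and its exact BO formulation are omitted for brevity.

import torch
import torch.nn.functional as F
from skopt import gp_minimize  # generic GP-based Bayesian Optimisation

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified, differentiable whole-frame SSIM for inputs in [0, 1];
    # the paper uses SSIM proper, this drops the Gaussian windowing.
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def attack_frame(model, video, label, idx, steps=50, lr=0.01, ssim_weight=10.0):
    # SGD on an additive perturbation of frame `idx` only (sparse attack):
    # maximise the classification loss while keeping SSIM to the clean frame high.
    delta = torch.zeros_like(video[:, idx], requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        adv = video.clone()
        adv[:, idx] = (video[:, idx] + delta).clamp(0, 1)
        attack_loss = F.cross_entropy(model(adv), label)
        sim_penalty = 1.0 - ssim_global(adv[:, idx], video[:, idx])
        loss = -attack_loss + ssim_weight * sim_penalty  # minimised by SGD
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        adv = video.clone()
        adv[:, idx] = (video[:, idx] + delta).clamp(0, 1)
        final_loss = F.cross_entropy(model(adv), label).item()
    return adv, final_loss

def most_critical_frame(model, video, label, n_calls=15):
    # Bayesian Optimisation over the integer frame index: pick the frame
    # whose perturbation raises the classification loss the most.
    T = video.shape[1]
    def objective(params):
        _, loss = attack_frame(model, video, label, int(params[0]), steps=5)
        return -loss  # gp_minimize minimises; we want maximal attack loss
    result = gp_minimize(objective, [(0, T - 1)], n_calls=n_calls)
    return int(result.x[0])

On the same pieces, one step of the adversarial training framework can be sketched as attacking the most critical frame and then updating the classifier on the resulting adversarial video (again a hypothetical outline, not the authors' training recipe):

def adversarial_training_step(model, optimiser, video, label):
    idx = most_critical_frame(model, video, label)
    adv, _ = attack_frame(model, video, label, idx)
    optimiser.zero_grad()  # clear gradients accumulated during the attack
    loss = F.cross_entropy(model(adv.detach()), label)
    loss.backward()
    optimiser.step()
    return loss.item()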

Item Type:
Journal Article
Journal or Publication Title:
Neural Networks
Funding:
Yes - externally funded
Subjects:
Deep learning; adversarial robustness; action recognition; adversarial training; video classification; artificial intelligence; cognitive neuroscience
ID Code:
211640
Deposited By:
Deposited On:
19 Dec 2023 09:50
Refereed?:
Yes
Published?:
Published
Last Modified:
01 May 2024 00:26