GradMDM: Adversarial Attack on Dynamic Networks

Rahmani, Hossein (2023) GradMDM: Adversarial Attack on Dynamic Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. ISSN 0162-8828 (In Press)

Text (Layer_wise_Attack)
Layer_wise_Attack.pdf - Accepted Version
Restricted to Repository staff only until 1 January 2050.
Available under License Creative Commons Attribution.



Dynamic neural networks can greatly reduce computational redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks, which aim to reduce their efficiency. Specifically, we attack dynamic models with our novel algorithm GradMDM. GradMDM adjusts both the direction and the magnitude of the gradients to effectively find a small perturbation for each input that activates more computational units of dynamic models during inference. We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques, significantly increasing computational complexity while reducing the perceptibility of the perturbations.
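To illustrate the general idea of an energy-oriented attack described in the abstract, here is a minimal, hypothetical sketch. It is not the paper's GradMDM algorithm: the toy gated model, the sigmoid surrogate for the number of active units, and all parameter names are assumptions made for illustration. The sketch only mirrors the broad principle of shaping the gradient's direction (via its sign) and bounding the perturbation's magnitude (via an L-infinity budget) so that more gated units fire at inference time.

```python
import numpy as np

def toy_gated_model(x, W, threshold=0.0):
    """Toy 'dynamic' layer (illustrative assumption): units whose
    pre-activation score exceeds a threshold are treated as active,
    i.e. their computation is executed at inference."""
    scores = W @ x
    active = scores > threshold
    return scores, active

def energy_attack(x, W, steps=50, step_size=0.01, eps=0.1, threshold=0.0):
    """Hypothetical energy-oriented attack sketch: find a small
    perturbation delta that increases the number of active units.

    A smooth surrogate (sigmoid of the gating scores) stands in for
    the non-differentiable unit count; we ascend its gradient, using
    the gradient's sign as the step direction and an L-infinity
    budget eps to cap the perturbation's magnitude."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        scores = W @ (x + delta)
        # Surrogate "energy": expected number of active units.
        sig = 1.0 / (1.0 + np.exp(-(scores - threshold)))
        # Gradient of sum(sigmoid(scores)) w.r.t. the input:
        # W^T (sig * (1 - sig)).
        grad = W.T @ (sig * (1.0 - sig))
        delta += step_size * np.sign(grad)   # direction: gradient sign
        delta = np.clip(delta, -eps, eps)    # magnitude: L-inf budget
    return delta
```

On a toy layer whose scores sit just below the gating threshold, a small bounded perturbation found this way is enough to switch all units on, which is the attack's goal: more computation for an almost imperceptible input change.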

Item Type:
Journal Article
Journal or Publication Title:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Additional Information:
©2023 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Uncontrolled Keywords:
ID Code:
Deposited By:
Deposited On:
27 Mar 2023 10:10
Status:
In Press
Last Modified:
11 Sep 2023 23:48