Assessment of the Robustness of Deep Neural Networks (DNNs)

Mu, Ronghui and Soriano Marcolino, Leandro and Ni, Qiang and Ruan, Wenjie (2023) Assessment of the Robustness of Deep Neural Networks (DNNs). PhD thesis, Lancaster University.

Text: 2023ronghuimuphd.pdf - Published Version - Download (7MB)

Abstract

In the past decade, Deep Neural Networks (DNNs) have demonstrated outstanding performance in various domains. Recently, however, researchers have shown that DNNs are surprisingly vulnerable to adversarial attacks: adding a small, human-imperceptible perturbation to an input image can fool a DNN, causing it to make an arbitrarily wrong prediction with high confidence. This raises serious concerns about the readiness of deep learning models for deployment in safety-critical applications such as surveillance systems, autonomous vehicles, and medical applications. Hence, it is vital to investigate the performance of DNNs in an adversarial environment. In this thesis, we study the robustness of DNNs in three aspects: adversarial attacks, adversarial defence, and robustness verification.

First, we address the robustness of video models and propose DeepSAVA, a sparse adversarial attack that adds human-imperceptible perturbations to the crucial frames of an input video to fool classifiers. Additionally, we construct a novel adversarial training framework based on the perturbations generated by DeepSAVA to increase the robustness of video classification models. The results show that DeepSAVA performs a relatively sparse attack on video models, yet achieves state-of-the-art performance in terms of attack success rate and adversarial transferability.

Next, we address the challenges of robustness verification for two kinds of deep learning models: 3D point cloud models and cooperative multi-agent reinforcement learning models (c-MARLs). Robustness verification aims to provide a rigorous guarantee that, within a given input space, a model is robust to any adversarial attack. To verify the robustness of 3D point cloud models, we propose an efficient verification framework, 3DVerifier, which tackles the cross-non-linearity of multiplication layers and the high computational cost of high-dimensional point cloud inputs. We use a linear relaxation to bound the multiplication layers and combine forward and backward propagation to compute certified bounds on the outputs of point cloud models. To certify c-MARLs, we propose a novel certification method, the first scalable approach that determines actions with guaranteed certified bounds for c-MARLs. The challenges of c-MARL certification are the uncertainty that accumulates as the number of agents increases and the potentially negligible impact that changing a single agent's action has on the global team reward; these challenges prevent existing algorithms from being applied directly. We employ a false discovery rate (FDR) controlling procedure that accounts for the importance of each agent to certify per-state robustness, and we propose a tree-search-based algorithm to find a lower bound on the global reward under the minimal certified perturbation. The experimental results show that the obtained certification bounds are much tighter than those of state-of-the-art RL certification solutions.

In summary, this thesis focuses on assessing the robustness of deep learning models that are widely applied in safety-critical systems yet rarely studied by the community. It not only investigates the motivation for and challenges of assessing the robustness of these models, but also proposes novel and effective approaches to tackle those challenges.
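The following is a minimal sketch, not the thesis' implementation, of the kind of sparse video attack DeepSAVA describes: a PGD-style perturbation restricted to a single, pre-selected frame of a clip. The model interface, the choice of the crucial frame, and all hyper-parameters (eps, alpha, steps) are assumptions introduced purely for illustration.

```python
# Sketch only: an L-infinity PGD attack that perturbs just one frame of a video,
# in the spirit of a sparse video attack such as DeepSAVA. Not the thesis' code.
import torch
import torch.nn.functional as F

def sparse_frame_attack(model, video, label, frame_idx, eps=8/255, alpha=2/255, steps=10):
    """Perturb only video[:, frame_idx] within an eps-ball around the original clip.

    video: tensor of shape (batch, frames, channels, height, width), pixels in [0, 1]
    frame_idx: index of the frame judged most influential (assumed to be given)
    """
    adv = video.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            # Ascend the loss, but only on the selected frame.
            step = torch.zeros_like(adv)
            step[:, frame_idx] = alpha * grad[:, frame_idx].sign()
            adv = adv + step
            # Project back into the eps-ball and the valid pixel range.
            adv = video + (adv - video).clamp(-eps, eps)
            adv = adv.clamp(0.0, 1.0).detach()
    return adv
```

The same perturbations could feed an adversarial training loop by mixing `sparse_frame_attack` outputs into each training batch; how the thesis actually constructs its training framework is not reproduced here.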
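Bounding a multiplication layer with a linear relaxation, as 3DVerifier does, requires sound linear lower and upper bounds on a product z = x*y over an input box. The sketch below uses the standard McCormick planes for this purpose and checks them numerically; the exact relaxation functions and the forward/backward propagation machinery of 3DVerifier are not reproduced, so treat this only as an illustration of the idea under stated assumptions.

```python
# Sketch only: McCormick-style linear relaxation of z = x * y over [lx, ux] x [ly, uy].
import numpy as np

def mccormick_planes(lx, ux, ly, uy):
    """Return coefficients (a, b, c) of planes a*x + b*y + c that bound z = x*y.

    lower: z >= ly*x + lx*y - lx*ly   (from (x - lx)(y - ly) >= 0)
    upper: z <= ly*x + ux*y - ux*ly   (from (x - ux)(y - ly) <= 0)
    """
    lower = (ly, lx, -lx * ly)
    upper = (ly, ux, -ux * ly)
    return lower, upper

def check_soundness(lx, ux, ly, uy, n=50):
    """Empirically confirm the planes bound x*y on a grid over the input box."""
    (al, bl, cl), (au, bu, cu) = mccormick_planes(lx, ux, ly, uy)
    xs, ys = np.meshgrid(np.linspace(lx, ux, n), np.linspace(ly, uy, n))
    z = xs * ys
    lower_plane = al * xs + bl * ys + cl
    upper_plane = au * xs + bu * ys + cu
    return bool((lower_plane <= z + 1e-9).all() and (z <= upper_plane + 1e-9).all())

print(check_soundness(-1.0, 2.0, 0.5, 3.0))  # expected: True
```

A verifier would propagate such planes through the network, layer by layer, to obtain certified output bounds; the combination of forward and backward propagation mentioned in the abstract is beyond this snippet.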
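The abstract also mentions a false discovery rate (FDR) controlling procedure for certifying per-state robustness across agents. The classical FDR-controlling procedure is Benjamini-Hochberg, sketched below on a set of per-agent p-values; whether the thesis uses exactly this variant or an importance-weighted one is an assumption here.

```python
# Sketch only: the Benjamini-Hochberg procedure for controlling the FDR at level alpha.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return the (sorted) indices of hypotheses rejected at FDR level alpha."""
    m = len(p_values)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose p-value ranks at or below k_max.
    return sorted(order[:k_max])

# Toy usage: hypothetical per-agent p-values from a robustness certification test.
print(benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.7], alpha=0.05))  # -> [0, 1]
```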

Item Type:
Thesis (PhD)
Uncontrolled Keywords:
Research Output Funding: yes - internally funded
Subjects:
yes - internally funded
ID Code:
207066
Deposited By:
Deposited On:
16 Oct 2023 11:35
Refereed?:
No
Published?:
Published
Last Modified:
14 Feb 2024 00:26