Quantifying safety risks of deep neural networks

Xu, Peipei and Ruan, Wenjie and Huang, Xiaowei (2023) Quantifying safety risks of deep neural networks. Complex & Intelligent Systems, 9 (4). pp. 3801-3818. ISSN 2199-4536

Full text not available from this repository.


Safety concerns have been raised about deep neural networks (DNNs) when they are applied to critical sectors. In this paper, we define safety risks by requiring that a network's decisions align with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. To quantify a risk, we take the maximum radius of the safe norm ball, within which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of the respective Lipschitz metrics, which are the quantities to be computed. In addition to the known adversarial, reachability, and invariant examples, in this paper we identify a new class of risk, the uncertainty example, on which humans can decide easily but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support efficient computation of the metrics. We evaluate our method on several benchmark neural networks, including ACAS-Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method achieves competitive performance on safety quantification in terms of both the tightness and the efficiency of the computation. Importantly, as a generic approach, our method works with a broad class of safety risks and places no restrictions on the structure of the neural network.
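The maximum-safe-radius idea in the abstract can be illustrated with a small sketch. The snippet below is not the paper's algorithm (which couples Lipschitz metrics with tensor-based GPU parallelization); it is a minimal derivative-free illustration in Python, assuming a black-box classifier f, an input x, and hypothetical names such as max_safe_radius and r_hi. It bisects on the radius of an L-infinity ball, probing each candidate radius with random samples and keeping the largest radius at which no misclassification is observed.

    import numpy as np

    def max_safe_radius(f, x, label, r_hi=1.0, samples=512, iters=20, seed=None):
        """Estimate the largest L-infinity radius around x at which random
        derivative-free probing finds no misclassification.
        A crude sampling sketch, not the paper's Lipschitz-based method."""
        rng = np.random.default_rng(seed)
        lo, hi = 0.0, r_hi
        for _ in range(iters):                      # bisection on the radius
            r = 0.5 * (lo + hi)
            # Draw perturbations uniformly from the L-infinity ball of radius r.
            deltas = rng.uniform(-r, r, size=(samples,) + x.shape)
            preds = np.array([f(x + d) for d in deltas])
            if np.all(preds == label):
                lo = r                              # no violation found: grow the radius
            else:
                hi = r                              # violation found: shrink the radius
        return lo                                   # optimistic estimate of the safe radius

    # Toy usage with a hypothetical linear classifier (illustrative only).
    w, b = np.array([1.0, -1.0]), 0.0
    f = lambda z: int(z @ w + b > 0)
    x = np.array([0.6, 0.1])
    print(max_safe_radius(f, x, f(x)))              # true safe L-inf radius here is 0.25

Because random probing can miss violations, the returned value is only an optimistic lower-style estimate; the Lipschitz-metric computation described in the abstract is what provides principled bounds.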

Item Type:
Journal Article
Journal or Publication Title:
Complex & Intelligent Systems
Keywords:
adversarial examples, Lipschitz metrics, neural networks, robustness, safety, uncertainty
Deposited On:
04 Aug 2022 15:30
Last Modified:
15 Jul 2024 22:52