A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Huang, X. and Kroening, D. and Ruan, W. and Sharp, J. and Sun, Y. and Thamo, E. and Wu, M. and Yi, X. (2020) A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37. ISSN 1574-0137

Full text not available from this repository.

Abstract

In the past few years, significant progress has been made on deep neural networks (DNNs), which have achieved human-level performance on several long-standing tasks. With the broader deployment of DNNs in various applications, public concerns over their safety and trustworthiness have been raised, especially after widely reported fatal incidents involving self-driving cars. Research to address these concerns is particularly active, with a significant number of papers released in the past few years. This survey reviews current research efforts into making DNNs safe and trustworthy, focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017.

Item Type:
Journal Article
Journal or Publication Title:
Computer Science Review
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700
Subjects:
Deep neural networks; Safety testing; Surveys; Fatal incidents; Human-level performance; Interpretability; Research efforts; Standing tasks; Neural networks; Theoretical Computer Science; Computer Science (all)
ID Code:
155575
Deposited On:
01 Jun 2021 14:55
Refereed?:
Yes
Published?:
Published
Last Modified:
20 Sep 2023 01:37