Fuzzy Detectors Against Adversarial Attacks

Li, Yi and Angelov, Plamen and Suri, Neeraj (2023) Fuzzy Detectors Against Adversarial Attacks. In: IEEE Symposium Series on Computational Intelligence, Mexico. (In Press)

Text (SSCI)
SSCI.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.


Abstract

Deep learning-based methods have proved useful for adversarial attack detection. However, conventional detection algorithms rely on crisp set theory for their classification boundaries and therefore cannot represent vague concepts. Motivated by recent successes of fuzzy systems, we propose a fuzzy rule-based neural network to improve adversarial attack detection accuracy. A pre-trained ImageNet model is used to extract feature maps from clean and attacked images. A fuzzification network then converts these feature maps into fuzzy sets describing the degree of difference between clean and attacked images. Fuzzy rules form the inference engine that determines the detection boundaries. In the defuzzification layer, the fuzzy prediction from the inference engine is mapped back into crisp model predictions for the images, and the loss between prediction and label drives the training of the rules in the fuzzy detector. We show that the fuzzy rule-based network learns richer feature information than binary outputs and yields an overall performance gain. Our experiments, conducted over a wide range of images, show that the proposed method consistently outperforms conventional crisp set training in adversarial attack detection across various fuzzy system-based neural networks.
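The pipeline described in the abstract (feature extraction, fuzzification, rule firing, defuzzification to a crisp detection decision) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature extractor is a stand-in for a pretrained ImageNet backbone, and the Gaussian memberships, product t-norm, and Takagi-Sugeno-style weighted-average defuzzification are common fuzzy-system choices assumed here for concreteness; the rule parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image):
    # Stand-in for a pretrained ImageNet backbone (assumption: the
    # paper uses real CNN feature maps; a fixed linear projection
    # keeps this sketch self-contained). Features are z-scored.
    W = np.linspace(-1.0, 1.0, image.size * 8).reshape(8, image.size)
    f = W @ image.ravel()
    return (f - f.mean()) / (f.std() + 1e-9)

def fuzzify(features, centers, widths):
    # Gaussian membership degrees: one fuzzy set per (rule, feature).
    return np.exp(-((features[None, :] - centers) ** 2) / (2.0 * widths ** 2))

def fire_rules(memberships):
    # Product t-norm: each rule's firing strength.
    return memberships.prod(axis=1)

def defuzzify(strengths, consequents):
    # Weighted average (Takagi-Sugeno style) -> crisp attack score in [0, 1].
    return float(strengths @ consequents / (strengths.sum() + 1e-9))

# Hypothetical parameters: 4 rules over 8 features. In training, the
# loss between this score and the clean/attacked label would update them.
centers = rng.normal(size=(4, 8))
widths = np.full((4, 8), 1.5)
consequents = np.array([0.1, 0.9, 0.2, 0.8])  # each rule's attack degree

image = rng.random((8, 8))
score = defuzzify(
    fire_rules(fuzzify(extract_features(image), centers, widths)), consequents
)
is_attacked = score > 0.5  # crisp detection decision
```

In a trainable version, the membership centers, widths, and rule consequents would be network parameters optimised by backpropagating the detection loss, which is what distinguishes this from a hand-crafted fuzzy system.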

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
204595
Deposited On:
01 Nov 2023 09:15
Refereed?:
Yes
Published?:
In Press
Last Modified:
14 Apr 2024 23:39