Adversarial Attacks and Defense Strategy in Deep Learning

  • Sarala D.V (Dept. of CS&E, Dayananda Sagar College of Engineering)
  • Thippeswamy Gangappa (Dept. of CS&E, BMSIT & M)
  • Received: 2024.01.05
  • Published: 2024.01.30

Abstract

With the rapid evolution of the Internet, artificial intelligence is being applied ever more widely, and the era of AI has arrived. At the same time, adversarial attacks on AI systems have become frequent, so research into adversarial attack security is extremely urgent, and an increasing number of researchers are working in this field. We provide a comprehensive review of the theories and methods that enable researchers to enter the field of adversarial attacks. This article follows a "Why? → What? → How?" line of exposition. First, we explain the significance of adversarial attacks. Then, we introduce their concepts, types, and hazards. Finally, we review the typical attack algorithms and defense techniques in each application area. Facing increasingly complex neural network models, this paper concentrates on the image, text, and malicious-code domains, and on the adversarial attack classifications and methods for these three data types, so that researchers can quickly find the type of study relevant to them. At the end of this review, we also raise some discussions and open issues and compare this survey with other similar reviews.
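As a concrete illustration of the kind of attack algorithm this review surveys, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks on image classifiers. It is a minimal sketch, assuming a PyTorch classifier and inputs normalized to [0, 1]; the model, data, and epsilon value are illustrative assumptions, not part of this review.

    # Minimal FGSM sketch (assumes PyTorch; model/inputs are hypothetical).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb input x within an L-infinity budget of epsilon so that
        the model's loss on the true label y increases."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss the most.
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Defenses surveyed in work like this (e.g., random masking, input denoising, adversarial training) aim to make such perturbations ineffective or detectable.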
