                  [9]  Li YD, Zhang SG, Wang WP, Song H. Backdoor attacks to deep learning models and countermeasures: A survey. IEEE Open Journal of
                      the Computer Society, 2023, 4: 134–146. [doi: 10.1109/OJCS.2023.3267221]
                 [10]  Goldblum M, Tsipras D, Xie CL, Chen XY, Schwarzschild A, Song D, Madry A, Li B, Goldstein T. Dataset security for machine
                       learning: Data poisoning, backdoor attacks, and defenses. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2023, 45(2): 1563–1580. [doi: 10.1109/TPAMI.2022.3162397]
                 [11]  Gao YS, Doan BG, Zhang Z, Ma SQ, Zhang JL, Fu AM, Nepal S, Kim H. Backdoor attacks and countermeasures on deep learning: A
                      comprehensive review. arXiv:2007.10760, 2020.
                 [12]  Wu BY, Zhu ZH, Liu L, Liu QS, He ZF, Lyu SW. Attacks in adversarial machine learning: A systematic survey from the life-cycle
                      perspective. arXiv:2302.09457, 2024.
                 [13]  Omar M. Backdoor learning for NLP: Recent advances, challenges, and future research directions. arXiv:2302.06801, 2023.
                 [14]  Li YM, Jiang Y, Li ZF, Xia ST. Backdoor learning: A survey. IEEE Trans. on Neural Networks and Learning Systems, 2024, 35(1):
                      5–22. [doi: 10.1109/TNNLS.2022.3182979]
                 [15]  Huang SX, Zhang QX, Wang YJ, Zhang YY, Li YZ. Research progress of backdoor attacks in deep neural networks. Computer Science,
                      2023, 50(9): 52–61 (in Chinese with English abstract). [doi: 10.11896/jsjkx.230500235]
                 [16]  Li SF, Dong T, Zhao BZH, Xue MH, Du SG, Zhu HJ. Backdoors against natural language processing: A review. IEEE Security &
                      Privacy, 2022, 20(5): 50–59. [doi: 10.1109/MSEC.2022.3181001]
                 [17]  Du W, Liu GS. A survey of backdoor attack in deep learning. Journal of Cyber Security, 2022, 7(3): 1–16 (in Chinese with English
                      abstract). [doi: 10.19363/J.cnki.cn10-1380/tn.2022.05.01]
                 [18]  Zheng MY, Lin Z, Liu ZX, Fu P, Wang WP. Survey of textual backdoor attack and defense. Journal of Computer Research and
                       Development, 2024, 61(1): 221–242 (in Chinese with English abstract). [doi: 10.7544/issn1000-1239.202220340]
                 [19]  Chen MX, Zhang ZY, Ji SL, Wei GY, Shao J. Survey of research progress on adversarial examples in images. Computer Science, 2022,
                      49(2): 92–106 (in Chinese with English abstract). [doi: 10.11896/jsjkx.210800087]
                 [20]  Pan XD, Zhang M, Sheng BN, Zhu JM, Yang M. Hidden trigger backdoor attack on NLP models via linguistic style manipulation. In:
                      Proc. of the 31st USENIX Security Symp. Boston: USENIX Association, 2022. 3611–3628.
                 [21]  Gu TY, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv:1708.06733,
                      2019.
                 [22]  Wang ZT, Zhai J, Ma SQ. BppAttack: Stealthy and efficient Trojan attacks against deep neural networks via image quantization and
                      contrastive adversarial learning. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. New Orleans:
                      IEEE, 2022. 15054–15063. [doi: 10.1109/CVPR52688.2022.01465]
                 [23]  Liu YF, Ma XJ, Bailey J, Lu F. Reflection backdoor: A natural backdoor attack on deep neural networks. In: Proc. of the 16th European
                      Conf. on Computer Vision. Glasgow: Springer, 2020. 182–199. [doi: 10.1007/978-3-030-58607-2_11]
                 [24]  Li SF, Xue MH, Zhao B, Zhu HJ, Zhang XP. Invisible backdoor attacks on deep neural networks via steganography and regularization.
                      IEEE Trans. on Dependable and Secure Computing, 2021, 18(5): 2088–2105. [doi: 10.1109/TDSC.2020.3021407]
                 [25]  Sun WL, Jiang XY, Dou SG, Li DS, Miao DQ, Deng C, Zhao CR. Invisible backdoor attack with dynamic triggers against person
                       re-identification. IEEE Trans. on Information Forensics and Security, 2024, 19: 307–319. [doi: 10.1109/TIFS.2023.3322659]
                 [26]  Nguyen A, Tran A. WaNet: Imperceptible warping-based backdoor attack. arXiv:2102.10369, 2021.
                 [27]  Lin JY, Xu L, Liu YQ, Zhang XY. Composite backdoor attack for deep neural network by mixing existing benign features. In: Proc. of
                      the 2020 ACM SIGSAC Conf. on Computer and Communications Security. ACM, 2020. 113–131. [doi: 10.1145/3372297.3423362]
                 [28]  Sarkar E, Benkraouda H, Krishnan G, Gamil H, Maniatakos M. FaceHack: Attacking facial recognition systems using malicious facial
                      characteristics. IEEE Trans. on Biometrics, Behavior, and Identity Science, 2022, 4(3): 361–372. [doi: 10.1109/TBIOM.2021.3132132]
                 [29]  Zhong HT, Liao C, Squicciarini AC, Zhu SC, Miller D. Backdoor embedding in convolutional neural network models via invisible
                       perturbation. In: Proc. of the 10th ACM Conf. on Data and Application Security and Privacy. New Orleans: ACM, 2020. 97–108.
                       [doi: 10.1145/3374664.3375751]
                 [30]  He Y, Shen ZL, Xia C, Hua JY, Tong W, Zhong S. SGBA: A stealthy scapegoat backdoor attack against deep neural networks.
                      Computers & Security, 2024, 136: 103523. [doi: 10.1016/j.cose.2023.103523]
                 [31]  Jia JY, Liu YP, Gong NZ. BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning. In: Proc. of the 2022 IEEE
                      Symp. on Security and Privacy. San Francisco: IEEE, 2022. 2043–2059. [doi: 10.1109/SP46214.2022.9833644]
                 [32]  Carlini N, Terzis A. Poisoning and backdooring contrastive learning. arXiv:2106.09667, 2022.
                 [33]  Zeng Y, Park W, Mao ZM, Jia RX. Rethinking the backdoor attacks’ triggers: A frequency perspective. In: Proc. of the 2021
                       IEEE/CVF Int’l Conf. on Computer Vision. Montreal: IEEE, 2021. 16453–16461. [doi: 10.1109/ICCV48922.2021.01616]
                 [34]  Xu ZQJ, Zhang YY, Xiao YY. Training behavior of deep neural network in frequency domain. In: Proc. of the 26th Int’l Conf. on
                       Neural Information Processing. Sydney: Springer, 2019.