
3298  Journal of Software (软件学报), 2025, Vol. 36, No. 7


Information Processing. Sydney: Springer, 2019. 264–274. [doi: 10.1007/978-3-030-36708-4_22]
[35] Xu ZQJ, Zhang YY, Luo T, Xiao YY, Ma Z. Frequency principle: Fourier analysis sheds light on deep neural networks. arXiv:1901.06523, 2024.
[36] Gao YD, Chen HL, Sun P, Li JJ, Zhang AQ, Wang ZB, Liu WF. A dual stealthy backdoor: From both spatial and frequency perspectives. In: Proc. of the 38th AAAI Conf. on Artificial Intelligence. Vancouver: AAAI Press, 2024. 1851–1859. [doi: 10.1609/aaai.v38i3.27954]
[37] Xia J, Yue ZH, Zhou YB, Ling ZW, Wei X, Chen MS. WaveAttack: Asymmetric frequency obfuscation-based backdoor attacks against deep neural networks. arXiv:2310.11595, 2023.
[38] Feng Y, Ma BT, Zhang J, Zhao SS, Xia Y, Tao DC. FIBA: Frequency-injection based backdoor attack in medical image analysis. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022. 20844–20853. [doi: 10.1109/CVPR52688.2022.02021]
[39] Chen XY, Salem A, Chen DF, Backes M, Ma SQ, Shen QN, Wu ZH, Zhang Y. BadNL: Backdoor attacks against NLP models with semantic-preserving improvements. In: Proc. of the 37th Annual Computer Security Applications Conf. ACM, 2021. 554–569. [doi: 10.1145/3485832.3485837]
[40] Qi FC, Yao Y, Xu S, Liu ZY, Sun MS. Turn the combination lock: Learnable textual backdoor attacks via word substitution. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing, Vol. 1 (Long Papers). ACL, 2021. 4873–4883. [doi: 10.18653/v1/2021.acl-long.377]
[41] Parhankangas A, Renko M. Linguistic style and crowdfunding success among social and commercial entrepreneurs. Journal of Business Venturing, 2017, 32(2): 215–236. [doi: 10.1016/j.jbusvent.2016.11.001]
[42] Dai JZ, Chen CS, Li YF. A backdoor attack against LSTM-based text classification systems. IEEE Access, 2019, 7: 138872–138878. [doi: 10.1109/ACCESS.2019.2941376]
[43] Yang WK, Lin YK, Li P, Zhou J, Sun X. Rethinking stealthiness of backdoor attack against NLP models. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing, Vol. 1 (Long Papers). ACL, 2021. 5543–5557. [doi: 10.18653/v1/2021.acl-long.431]
[44] Qi FC, Li MK, Chen YY, Zhang ZY, Liu ZY, Wang YS, Sun MS. Hidden Killer: Invisible textual backdoor attacks with syntactic trigger. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing, Vol. 1 (Long Papers). ACL, 2021. 443–453. [doi: 10.18653/v1/2021.acl-long.37]
[45] Zhou XK, Li JW, Zhang TW, Lyu LJ, Yang MQ, He J. Backdoor attacks with input-unique triggers in NLP. arXiv:2303.14325, 2023.
[46] Chan A, Tay Y, Ong YS, Zhang A. Poison attacks against text datasets with conditional adversarially regularized autoencoder. In: Findings of the Association for Computational Linguistics: EMNLP 2020. ACL, 2020. 4175–4189. [doi: 10.18653/v1/2020.findings-emnlp.373]
[47] Jin RN, Huang CY, You CY, Li XX. Backdoor attack on unpaired medical image-text foundation models: A pilot study on MedCLIP. arXiv:2401.01911, 2024.
[48] Yao YS, Li HY, Zheng HT, Zhao BY. Latent backdoor attacks on deep neural networks. In: Proc. of the 2019 ACM SIGSAC Conf. on Computer and Communications Security. London: ACM, 2019. 2041–2055. [doi: 10.1145/3319535.3354209]
[49] Shen LJ, Ji SL, Zhang XH, Li JF, Chen J, Shi J, Fang CF, Yin JW, Wang T. Backdoor pre-trained models can transfer to all. In: Proc. of the 2021 ACM SIGSAC Conf. on Computer and Communications Security. ACM, 2021. 3141–3158. [doi: 10.1145/3460120.3485370]
[50] Chen KJ, Meng YX, Sun XF, Guo SW, Zhang TW, Li JW, Fan C. BADPRE: Task-agnostic backdoor attacks to pre-trained NLP foundation models. arXiv:2110.02467, 2021.
[51] Liu MX, Zhang ZH, Zhang YM, Zhang C, Li Z, Li Q, Duan HX, Sun DH. Automatic generation of adversarial readable Chinese texts. IEEE Trans. on Dependable and Secure Computing, 2023, 20(2): 1756–1770. [doi: 10.1109/TDSC.2022.3164289]
[52] Turner A, Tsipras D, Madry A. Label-consistent backdoor attacks. arXiv:1912.02771, 2019.
[53] Saha A, Subramanya A, Pirsiavash H. Hidden trigger backdoor attacks. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI Press, 2020. 11957–11965. [doi: 10.1609/aaai.v34i07.6871]
[54] Ning R, Li J, Xin CS, Wu HY. Invisible poison: A blackbox clean label backdoor attack to deep neural networks. In: Proc. of the 2021 IEEE Conf. on Computer Communications. Vancouver: IEEE, 2021. 1–10. [doi: 10.1109/INFOCOM42981.2021.9488902]
[55] Tan TJL, Shokri R. Bypassing backdoor detection algorithms in deep learning. In: Proc. of the 2020 IEEE European Symp. on Security and Privacy. Genoa: IEEE, 2020. 175–183. [doi: 10.1109/EuroSP48549.2020.00019]
[56] Zhao SH, Ma XJ, Zheng X, Bailey J, Chen JJ, Jiang YG. Clean-label backdoor attacks on video recognition models. In: Proc. of the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 14431–14440. [doi: 10.1109/CVPR42600.2020.01445]