                 [80]  Kurita K, Michel P, Neubig G. Weight poisoning attacks on pre-trained models. arXiv:2004.06660, 2020.
                 [81]  Wei CA, Lee Y, Chen K, Meng GZ, Lv PZ. Aliasing backdoor attacks on pre-trained models. In: Proc. of the 32nd USENIX Security
                      Symp. Anaheim: USENIX Association, 2023. 2707–2724.
                 [82]  Li HL, Wang YF, Xie XF, Liu Y, Wang SQ, Wan RJ, Chau LP, Kot AC. Light can hack your face! Black-box backdoor attack on face
                      recognition systems. arXiv:2009.06996, 2020.
                 [83]  Rakin AS, He ZZ, Fan DL. TBT: Targeted neural network attack with bit Trojan. In: Proc. of the 2020 IEEE/CVF Conf. on Computer
                      Vision and Pattern Recognition. Seattle: IEEE, 2020. 13195–13204. [doi: 10.1109/CVPR42600.2020.01321]
                 [84]  Chen HL, Fu C, Zhao JS, Koushanfar F. ProFlip: Targeted Trojan attack with progressive bit flips. In: Proc. of the 2021 IEEE/CVF Int’l
                      Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 7698–7707. [doi: 10.1109/ICCV48922.2021.00762]
                 [85]  Bagdasaryan E, Shmatikov V. Blind backdoors in deep learning models. In: Proc. of the 30th USENIX Security Symp. USENIX Association, 2021. 1505–1521.
                 [86]  Saha A, Tejankar A, Koohpayegani SA, Pirsiavash H. Backdoor attacks on self-supervised learning. In: Proc. of the 2022 IEEE/CVF
                      Conf. on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022. 13327–13336. [doi: 10.1109/CVPR52688.2022.01298]
                 [87]  Hou RT, Huang T, Yan HY, Ke LS, Tang WX. A stealthy and robust backdoor attack via frequency domain transform. World Wide
                      Web, 2023, 26(5): 2767–2783. [doi: 10.1007/s11280-023-01153-3]
                 [88]  Wang T, Yao Y, Xu F, An SW, Tong HH, Wang T. An invisible black-box backdoor attack through frequency domain. In: Proc. of the
                      17th European Conf. on Computer Vision. Tel Aviv: Springer, 2022. 396–413. [doi: 10.1007/978-3-031-19778-9_23]
                 [89]  Xiang Z, Miller DJ, Chen SH, Li X, Kesidis G. A backdoor attack against 3D point cloud classifiers. In: Proc. of the 2021 IEEE/CVF Int’l
                      Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 7577–7587. [doi: 10.1109/ICCV48922.2021.00750]
                 [90]  Gao KF, Bai JW, Wu BY, Ya MX, Xia ST. Imperceptible and robust backdoor attack in 3D point cloud. IEEE Trans. on Information
                      Forensics and Security, 2024, 19: 1267–1282. [doi: 10.1109/TIFS.2023.3333687]
                 [91]  Li SF, Liu H, Dong T, Zhao BZH, Xue MH, Zhu HJ, Lu JL. Hidden backdoors in human-centric language models. In: Proc. of the 2021
                      ACM SIGSAC Conf. on Computer and Communications Security. ACM, 2021. 3123–3140. [doi: 10.1145/3460120.3484576]
                 [92]  Li ZC, Li PJ, Sheng X, Yin CC, Zhou L. IMTM: Invisible multi-trigger multimodal backdoor attack. In: Proc. of the 12th National CCF Conf. on Natural Language Processing and Chinese Computing. Foshan: Springer, 2023. 533–545. [doi: 10.1007/978-3-031-44696-2_42]
                 [93]  Mei K, Li Z, Wang ZT, Zhang Y, Ma SQ. NOTABLE: Transferable backdoor attacks against prompt-based NLP models. In: Proc. of
                      the 61st Annual Meeting of the Association for Computational Linguistics, Vol. 1 (Long Papers). Toronto: ACL, 2023. 15551–15565.
                      [doi: 10.18653/v1/2023.acl-long.867]
                 [94]  Barni M, Kallas K, Tondi B. A new backdoor attack in CNNs by training set corruption without label poisoning. In: Proc. of the 2019 IEEE Int’l Conf. on Image Processing (ICIP). Taipei: IEEE, 2019. 101–105. [doi: 10.1109/ICIP.2019.8802997]
                 [95]  Zhang Q, Ding YF, Tian YQ, Guo JM, Yuan M, Jiang Y. AdvDoor: Adversarial backdoor attack of deep learning system. In: Proc. of the 30th ACM SIGSOFT Int’l Symp. on Software Testing and Analysis. Virtual: ACM, 2021. 127–138. [doi: 10.1145/3460319.3464809]
                 [96]  Shafahi A, Huang WR, Najibi M, Suciu O, Studer C, Dumitras T, Goldstein T. Poison frogs! Targeted clean-label poisoning attacks on neural networks. In: Proc. of the 32nd Conf. on Neural Information Processing Systems. Montreal: Curran Associates Inc., 2018. 6106–6116.
                 [97]  D’Onghia M, Di Cesare F, Gallo L, Carminati M, Polino M, Zanero S. Lookin’ out my backdoor! Investigating backdooring attacks
                      against DL-driven malware detectors. In: Proc. of the 16th ACM Workshop on Artificial Intelligence and Security. Copenhagen: ACM,
                      2023. 209–220. [doi: 10.1145/3605764.3623919]
                 [98]  Li YZ, Li YM, Wu BY, Li LK, He R, Lyu SW. Invisible backdoor attack with sample-specific triggers. In: Proc. of the 2021 IEEE/CVF
                      Int’l Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 16443–16452. [doi: 10.1109/ICCV48922.2021.01615]
                 [99]  Ma BH, Zhao C, Wang DJ, Meng B. DIHBA: Dynamic, invisible and high attack success rate boundary backdoor attack with low
                      poison ratio. Computers & Security, 2023, 129: 103212. [doi: 10.1016/j.cose.2023.103212]
                 [100]  Chen B, Carvalho W, Baracaldo N, Ludwig H, Edwards B, Lee T, Molloy I, Srivastava B. Detecting backdoor attacks on deep neural networks by activation clustering. arXiv:1811.03728, 2018.
                 [101]  Tran B, Li J, Madry A. Spectral signatures in backdoor attacks. In: Proc. of the 32nd Int’l Conf. on Neural Information Processing
                      Systems. Montreal: Curran Associates Inc., 2018. 8011–8021.
                 [102]  Pan MZ, Zeng Y, Lyu LJ, Lin X, Jia RX. ASSET: Robust backdoor data detection across a multiplicity of deep learning paradigms. In:
                      Proc. of the 32nd USENIX Security Symp. Anaheim: USENIX Association, 2023. 2725–2742.
                 [103]  Ma WL, Wang DR, Sun RX, Xue MH, Wen S, Xiang Y. The “Beatrix” Resurrections: Robust backdoor detection via Gram matrices. In: