[57] Cheng SY, Liu YQ, Ma SQ, Zhang XY. Deep feature space Trojan attack of neural networks by controlled detoxification. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. Virtual Event: AAAI Press, 2021. 1148–1156. [doi: 10.1609/aaai.v35i2.16201]
[58] Hammoud HAAK, Ghanem B. Check your other door! Creating backdoor attacks in the frequency domain. In: Proc. of the 33rd British Machine Vision Conf. London: BMVA Press, 2022. 259.
[59] Liu XR, Tan YA, Wang YJ, Qiu KF, Li YZ. Stealthy low-frequency backdoor attack against deep neural networks. arXiv:2305.09677, 2023.
[60] Yue C, Lv PZ, Liang RG, Chen K. Invisible backdoor attacks using data poisoning in frequency domain. In: Proc. of the 26th European Conf. on Artificial Intelligence. Kraków: IOS Press, 2023. 2954–2961. [doi: 10.3233/FAIA230610]
[61] Li XK, Chen ZR, Zhao Y, Tong ZK, Zhao YB, Lim A, Zhou JT. PointBA: Towards backdoor attacks in 3D point cloud. In: Proc. of the 2021 IEEE/CVF Int’l Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 16472–16481. [doi: 10.1109/ICCV48922.2021.01618]
[62] Fan LK, He FZ, Si TZ, Fan RB, Ye CL, Li B. MBA: Backdoor attacks against 3D mesh classifier. IEEE Trans. on Information Forensics and Security, 2024, 19: 2127–2142. [doi: 10.1109/TIFS.2023.3346644]
[63] Sasaki S, Hidano S, Uchibayashi T, Suganuma T, Hiji M, Kiyomoto S. On embedding backdoor in malware detectors using machine learning. In: Proc. of the 17th Int’l Conf. on Privacy, Security and Trust. Fredericton: IEEE, 2019. 1–5. [doi: 10.1109/PST47121.2019.8949034]
[64] Li CR, Chen X, Wang DR, Wen S, Ahmed ME, Camtepe S, Xiang Y. Backdoor attack on machine learning based Android malware detectors. IEEE Trans. on Dependable and Secure Computing, 2022, 19(5): 3357–3370. [doi: 10.1109/TDSC.2021.3094824]
[65] Tian JW, Qiu KF, Gao DB, Wang Z, Kuang XH, Zhao G. Sparsity brings vulnerabilities: Exploring new metrics in backdoor attacks. In: Proc. of the 32nd USENIX Security Symp. Anaheim: USENIX Association, 2023. 2689–2706.
[66] Salem A, Wen R, Backes M, Ma SQ, Zhang Y. Dynamic backdoor attacks against machine learning models. In: Proc. of the 7th IEEE European Symp. on Security and Privacy (EuroS&P). Genoa: IEEE, 2022. 703–718. [doi: 10.1109/EuroSP53844.2022.00049]
[67] Nguyen TA, Tran TA. Input-aware dynamic backdoor attack. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 3454–3464.
[68] Doan K, Lao YJ, Zhao WJ, Li P. LIRA: Learnable, imperceptible and robust backdoor attacks. In: Proc. of the 2021 IEEE/CVF Int’l Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 11946–11956. [doi: 10.1109/ICCV48922.2021.01175]
[69] Gong XL, Chen YJ, Wang Q, Huang HY, Meng LS, Shen C, Zhang Q. Defense-resistant backdoor attacks against deep neural networks in outsourced cloud environment. IEEE Journal on Selected Areas in Communications, 2021, 39(8): 2617–2631. [doi: 10.1109/JSAC.2021.3087237]
[70] Xue MF, Ni SF, Wu YH, Zhang YS, Liu WQ. Imperceptible and multi-channel backdoor attack. Applied Intelligence, 2024, 54(1): 1099–1116. [doi: 10.1007/s10489-023-05228-6]
[71] Chow KH, Wei WQ, Yu L. Imperio: Language-guided backdoor attacks for arbitrary model control. In: Proc. of the 33rd Int’l Joint Conf. on Artificial Intelligence. Jeju: ijcai.org, 2024. 704–712. [doi: 10.24963/ijcai.2024/78]
[72] Liu YQ, Ma SQ, Aafer Y, Lee WC, Zhai J, Wang WH, Zhang XY. Trojaning attack on neural networks. In: Proc. of the 25th Annual Network and Distributed System Security Symp. San Diego: Internet Society, 2018. [doi: 10.14722/ndss.2018.23291]
[73] Lv PZ, Ma HL, Zhou JC, Liang RG, Chen K, Zhang SZ, Yang YF. DBIA: Data-free backdoor injection attack against transformer networks. arXiv:2111.11870, 2021.
[74] Lv PZ, Yue C, Liang RG, Yang YF, Zhang SZ, Ma HL, Chen K. A data-free backdoor injection approach in neural networks. In: Proc. of the 32nd USENIX Security Symp. Anaheim: USENIX Association, 2023. 2671–2688.
[75] Yu Y, Wang YF, Yang WH, Lu SJ, Tan YP, Kot AC. Backdoor attacks against deep image compression via adaptive frequency trigger. In: Proc. of the 2023 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023. 12250–12259. [doi: 10.1109/CVPR52729.2023.01179]
[76] Yang WK, Li L, Zhang ZY, Ren XC, Sun X, He B. Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in NLP models. In: Proc. of the 2021 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ACL, 2021. 2048–2058. [doi: 10.18653/v1/2021.naacl-main.165]
[77] Li LY, Song DM, Li XN, Zeng JH, Ma RT, Qiu XP. Backdoor attacks on pre-trained models by layerwise weight poisoning. In: Proc. of the 2021 Conf. on Empirical Methods in Natural Language Processing. ACL, 2021. 3023–3032. [doi: 10.18653/v1/2021.emnlp-main.241]
[78] Tang RX, Du MN, Liu NH, Yang F, Hu X. An embarrassingly simple approach for Trojan attack in deep neural networks. In: Proc. of the 26th ACM SIGKDD Int’l Conf. on Knowledge Discovery & Data Mining. ACM, 2020. 218–228. [doi: 10.1145/3394486.3403064]
[79] Hong S, Carlini N, Kurakin A. Handcrafted backdoors in deep neural networks. In: Proc. of the 36th Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2022. 8068–8080.