                      arXiv:1312.6199, 2014.
                 [89]  Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative adversarial networks: An overview. IEEE
                      Signal Processing Magazine, 2018, 35(1): 53–65. [doi: 10.1109/MSP.2017.2765202]
                 [90]  Dumford J, Scheirer W. Backdooring convolutional neural networks via targeted weight perturbations. In: Proc. of the 2020 IEEE Int’l
                      Joint Conf. on Biometrics (IJCB). Houston: IEEE, 2020. 1–9. [doi: 10.1109/IJCB48548.2020.9304875]
                 [91]  Rudd EM, Rozsa A, Günther M, Boult TE. A survey of stealth malware attacks, mitigation measures, and steps toward autonomous open
                      world solutions. IEEE Communications Surveys & Tutorials, 2017, 19(2): 1145–1172. [doi: 10.1109/COMST.2016.2636078]
                 [92]  Costales R, Mao CZ, Norwitz R, Kim B, Yang JF. Live Trojan attacks on deep neural networks. In: Proc. of the 2020 IEEE/CVF Conf.
                      on Computer Vision and Pattern Recognition Workshops. Seattle: IEEE, 2020. 3460–3469. [doi: 10.1109/CVPRW50498.2020.00406]
                 [93]  Zhang QX, Ma WC, Wang YJ, Zhang YY, Shi ZW, Li YZ. Backdoor attacks on image classification models in deep neural networks.
                      Chinese Journal of Electronics, 2022, 31(2): 199–212. [doi: 10.1049/cje.2021.00.126]
                 [94]  Tang RX, Du MN, Liu NH, Yang F, Hu X. An embarrassingly simple approach for Trojan attack in deep neural networks. In: Proc. of
                      the 26th ACM SIGKDD Int’l Conf. on Knowledge Discovery & Data Mining. ACM, 2020. 218–228. [doi: 10.1145/3394486.3403064]
                 [95]  Li YC, Hua JY, Wang HY, Chen CY, Liu YX. DeepPayload: Black-box backdoor attack on deep learning models through neural
                       payload injection. In: Proc. of the 43rd IEEE/ACM Int’l Conf. on Software Engineering (ICSE). Madrid: IEEE, 2021. 263–274.
                       [doi: 10.1109/ICSE43902.2021.00035]
                 [96]  Tramèr F, Zhang F, Juels A, Reiter MK, Ristenpart T. Stealing machine learning models via prediction APIs. In: Proc. of the 25th
                      USENIX Conf. on Security Symp. Austin: USENIX Association, 2016. 601–618.
                 [97]  Truong JB, Maini P, Walls RJ, Papernot N. Data-free model extraction. In: Proc. of the 2021 IEEE/CVF Conf. on Computer Vision and
                      Pattern Recognition. Nashville: IEEE, 2021. 4769–4778. [doi: 10.1109/CVPR46437.2021.00474]
                 [98]  Shokri R, Stronati M, Song CZ, Shmatikov V. Membership inference attacks against machine learning models. In: Proc. of the 2017
                      IEEE Symp. on Security and Privacy (SP). San Jose: IEEE, 2017. 3–18. [doi: 10.1109/SP.2017.41]
                 [99]  Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. In: Proc. of
                      the 22nd ACM SIGSAC Conf. on Computer and Communications Security. Denver: ACM, 2015. 1322–1333. [doi: 10.1145/2810103.
                      2813677]
                 [100]  Zhu LG, Liu ZJ, Han S. Deep leakage from gradients. In: Proc. of the 33rd Int’l Conf. on Neural Information Processing Systems.
                      Vancouver: Curran Associates Inc., 2019. 14774–14784.
                 [101]  Lin SC, Zhang YQ, Hsu CH, Skach M, Haque ME, Tang LJ, Mars J. The architectural implications of autonomous driving: Constraints
                       and acceleration. In: Proc. of the 23rd Int’l Conf. on Architectural Support for Programming Languages and Operating Systems.
                      Williamsburg: ACM, 2018. 751–766. [doi: 10.1145/3173162.3173191]
                 [102]  Cheng ZY, Wu BY, Zhang ZY, Zhao JJ. TAT: Targeted backdoor attacks against visual object tracking. Pattern Recognition, 2023, 142:
                      109629. [doi: 10.1016/j.patcog.2023.109629]
                 [103]  Zhang KY, Song X, Zhang CH, Yu S. Challenges and future directions of secure federated learning: A survey. Frontiers of Computer
                      Science, 2022, 16(5): 165817. [doi: 10.1007/s11704-021-0598-z]
                 [104]  McMahan B, Moore E, Ramage D, Hampson S, Aguera y Arcas B. Communication-efficient learning of deep networks from
                      decentralized data. In: Proc. of the 20th Int’l Conf. on Artificial Intelligence and Statistics. Fort Lauderdale: PMLR, 2017. 1273–1282.
                 [105]  Peri N, Gupta N, Huang WR, Fowl L, Zhu C, Feizi S, Goldstein T, Dickerson JP. Deep K-NN defense against clean-label data poisoning
                      attacks. In: Proc. of the 16th European Conf. on Computer Vision. Glasgow: Springer, 2020. 55–70. [doi: 10.1007/978-3-030-66415-
                      2_4]
                 [106]  Rosenfeld E, Winston E, Ravikumar P, Kolter JZ. Certified robustness to label-flipping attacks via randomized smoothing. In: Proc. of
                      the 37th Int’l Conf. on Machine Learning. Virtual Event: JMLR.org, 2020. 8230–8241.
                 [107]  Xiao P, Li YY, Li XH. Design and implementation of firewall based on MOST. Science and Technology & Innovation, 2009, 25(21):
                      57–58, 61 (in Chinese with English abstract). [doi: 10.3969/j.issn.1008-0570.2009.21.024]
                 [108]  Wu YH. Research on vehicle CAN network intrusion detection system based on neural networks [MS. Thesis]. Chengdu: Chengdu
                       University of Information Engineering, 2018 (in Chinese).
                 [109]  Wei K, Li J, Ding M, Ma C, Yang HH, Farokhi F, Jin S, Quek TQS, Vincent Poor H. Federated learning with differential privacy:
                      Algorithms and performance analysis. IEEE Trans. on Information Forensics and Security, 2020, 15: 3454–3469. [doi: 10.1109/TIFS.
                      2020.2988575]
                 [110]  Wang JX, Guo S, Xie X, Qi H. Protect privacy from gradient leakage attack in federated learning. In: Proc. of the 2022 IEEE Conf. on
                      Computer Communications. London: IEEE, 2022. 580–589. [doi: 10.1109/INFOCOM48880.2022.9796841]