
2128                                                       Journal of Software (软件学报), 2025, 36(5)


                     Data Mining. Atlantic City: IEEE, 2015. 301–309. [doi: 10.1109/ICDM.2015.84]
                 [17]  Jakubovitz D, Giryes R. Improving DNN robustness to adversarial attacks using Jacobian regularization. In: Proc. of the 15th European
                     Conf. on Computer Vision. Munich: Springer, 2018. 525–541. [doi: 10.1007/978-3-030-01258-8_32]
                 [18]  Chan A, Tay Y, Ong YS, Fu J. Jacobian adversarially regularized networks for robustness. In: Proc. of the 8th Int’l Conf. on Learning
                     Representations. Addis Ababa: OpenReview.net, 2020. 1–13.
                 [19]  Moosavi-Dezfooli SM, Fawzi A, Uesato J, Frossard P. Robustness via curvature regularization, and vice versa. In: Proc. of the 2019
                     IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 9070–9078. [doi: 10.1109/CVPR.2019.00929]
                 [20]  Guo C, Rana M, Cissé M, van der Maaten L. Countering adversarial images using input transformations. In: Proc. of the 6th Int’l Conf.
                     on Learning Representations. Vancouver: OpenReview.net, 2018. 1–12.
                 [21]  Rebuffi SA, Gowal S, Calian DA, Stimberg F, Wiles O, Mann TA. Data augmentation can improve robustness. In: Proc. of the 35th Conf.
                     on Neural Information Processing Systems. 2021. 29935–29948.
                 [22]  Wang YS, Zou DF, Yi JF, Bailey J, Ma XJ, Gu QQ. Improving adversarial robustness requires revisiting misclassified examples. In: Proc.
                     of the 8th Int’l Conf. on Learning Representations. Addis Ababa: OpenReview.net, 2020. 1–13.
                 [23]  Chen ZM, Xue W, Tian WW, Wu YH, Hua B. Toward deep neural networks robust to adversarial examples, using augmented data
                     importance perception. Journal of Electronic Imaging, 2022, 31(6): 063046. [doi: 10.1117/1.JEI.31.6.063046]
                 [24]  Jin GJ, Yi XP, Wu DY, Mu RH, Huang XW. Randomized adversarial training via Taylor expansion. In: Proc. of the 2023 IEEE/CVF
                     Conf. on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023. 16447–16457. [doi: 10.1109/CVPR52729.2023.01578]
                 [25]  Tsipras D, Santurkar S, Engstrom L, Turner A, Madry A. Robustness may be at odds with accuracy. In: Proc. of the 7th Int’l Conf. on
                     Learning Representations. New Orleans: OpenReview.net, 2019. 1–24.
                 [26]  Zhang HY, Yu YD, Jiao JT, Xing E, El Ghaoui L, Jordan M. Theoretically principled trade-off between robustness and accuracy. In:
                     Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 12907–12929.
                 [27]  Co KT, Martinez-Rego D, Hau Z, Lupu EC. Jacobian ensembles improve robustness trade-offs to adversarial attacks. In: Proc. of the 31st
                     Int’l Conf. on Artificial Neural Networks. Bristol: Springer, 2022. 680–691. [doi: 10.1007/978-3-031-15934-3_56]
                 [28]  Grabinski J, Gavrikov P, Keuper J, Keuper M. Robust models are less over-confident. In: Proc. of the 36th Int’l Conf. on Neural
                     Information Processing Systems. New Orleans: Curran Associates Inc., 2022. 2831.
                 [29]  Zhang JF, Xu XL, Han B, Niu G, Cui LZ, Sugiyama M, Kankanhalli M. Attacks which do not kill training make adversarial learning
                     stronger. In: Proc. of the 37th Int’l Conf. on Machine Learning. PMLR, 2020. 11278–11287.
                 [30]  Sharma A, Narayan A. Soft adversarial training can retain natural accuracy. In: Proc. of the 14th Int’l Conf. on Agents and Artificial
                     Intelligence. SciTePress, 2022. 1–7. [doi: 10.5220/0010871000003116]
                 [31]  Gehr T, Mirman M, Drachsler-Cohen D, Tsankov P, Chaudhuri S, Vechev M. AI2: Safety and robustness certification of neural networks
                     with abstract interpretation. In: Proc. of the 2018 IEEE Symp. on Security and Privacy. San Francisco: IEEE, 2018. 3–18. [doi: 10.1109/
                     SP.2018.00058]
                 [32]  Takase T. Feature combination mixup: Novel mixup method using feature combination for neural networks. Neural Computing and
                     Applications, 2023, 35(17): 12763–12774. [doi: 10.1007/s00521-023-08421-3]
                 [33]  Papernot N, McDaniel P, Wu X, Jha S, Swami A. Distillation as a defense to adversarial perturbations against deep neural networks. In:
                     Proc. of the 2016 IEEE Symp. on Security and Privacy. San Jose: IEEE, 2016. 582–597. [doi: 10.1109/SP.2016.41]
                 [34]  Dong NQ, Wang JY, Voiculescu I. Revisiting vicinal risk minimization for partially supervised multi-label classification under data
                     scarcity. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops. New Orleans: IEEE, 2022.
                     4211–4219. [doi: 10.1109/CVPRW56347.2022.00466]
                 [35]  Hoffman J, Roberts DA, Yaida S. Robust learning with Jacobian regularization. In: Proc. of the 2020 Int’l Conf. on Learning
                     Representations. 2020. 1–21.
                 [36]  Le BM, Tariq S, Woo SS. OTJR: Optimal transport meets optimal Jacobian regularization for adversarial robustness. In: Proc. of the
                     23rd IEEE Conf. on Computer Vision and Pattern Recognition. 2023. 7551–7562. [doi: 10.48550/arXiv.2303.11793]

                 Chinese references (translated):
                 [5]  Lu HY, Zhang M, Liu YQ, Ma SP. Convolutional neural network feature importance analysis and feature selection enhancement model.
                    Ruan Jian Xue Bao/Journal of Software, 2017, 28(11): 2879–2890 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/
                    5349.htm [doi: 10.13328/j.cnki.jos.005349]
                 [6]  Bai C, Huang L, Chen JN, Pan X, Chen SY. Optimization of deep convolutional neural network for large scale image classification.
                    Ruan Jian Xue Bao/Journal of Software, 2018, 29(4): 1029–1038 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/
                    5404.htm [doi: 10.13328/j.cnki.jos.005404]