                     Cambridge: MIT Press, 1995.
                 [11]  He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In: Proc. of the 2016 IEEE Conf. on Computer Vision
                     and Pattern Recognition (CVPR). Las Vegas: IEEE, 2016. 770–778. [doi: 10.1109/CVPR.2016.90]
                 [12]  Fawzi A, Moosavi-Dezfooli SM, Frossard P. Robustness of classifiers: From adversarial to random noise. In: Proc. of the 30th Int’l Conf.
                      on Neural Information Processing Systems. Barcelona: Curran Associates Inc., 2016. 1632–1640.
                 [13]  Bengio Y, Courville A, Vincent P. Representation learning: A review and new perspectives. IEEE Trans. on Pattern Analysis and
                      Machine Intelligence, 2013, 35(8): 1798–1828. [doi: 10.1109/TPAMI.2013.50]
                 [14]  Müller R, Kornblith S, Hinton G. When does label smoothing help? In: Proc. of the 33rd Int’l Conf. on Neural Information Processing
                     Systems. Vancouver: Curran Associates Inc., 2019. 422.
                 [15]  Cai XX, Du HM. Survey on adversarial example generation and adversarial attack method. Journal of Xi’an University of Posts and
                      Telecommunications, 2021, 26(1): 67–75 (in Chinese with English abstract). [doi: 10.13682/j.issn.2095-6533.2021.01.011]
                 [16]  Kurakin A, Goodfellow IJ, Bengio S. Adversarial examples in the physical world. In: Proc. of the 5th Int’l Conf. on Learning Representations.
                      Toulon: OpenReview.net, 2017. 99–112.
                 [17]  Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. In: Proc. of the 6th
                     Int’l Conf. on Learning Representations. Vancouver: OpenReview.net, 2018.
                 [18]  Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In: Proc. of the 2017 IEEE Symp. on Security and Privacy
                     (SP). San Jose: IEEE, 2017. 39–57. [doi: 10.1109/SP.2017.49]
                 [19]  Xiao CW, Li B, Zhu JY, He W, Liu MY, Song D. Generating adversarial examples with adversarial networks. In: Proc. of the 27th Int’l
                     Joint Conf. on Artificial Intelligence. Stockholm: AAAI Press, 2018. 3905–3911. [doi: 10.24963/ijcai.2018/543]
                 [20]  Huang LF, Zhuang WZ, Liao YX, Liu N. Black-box adversarial attack method based on evolution strategy and attention mechanism.
                     Ruan Jian Xue Bao/Journal of Software, 2021, 32(11): 3512–3529 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/
                     6084.htm [doi: 10.13328/j.cnki.jos.006084]
                 [21]  Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A. Practical black-box attacks against machine learning. In: Proc. of the
                      2017 ACM Asia Conf. on Computer and Communications Security. Abu Dhabi: ACM, 2017. 506–519. [doi: 10.1145/3052973.3053009]
                 [22]  Pan WW, Wang XY, Song ML, Chen C. Survey on generating adversarial examples. Ruan Jian Xue Bao/Journal of Software, 2020,
                     31(1): 67–81 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5884.htm [doi: 10.13328/j.cnki.jos.005884]
                 [23]  Pang TY, Du C, Dong YP, Zhu J. Towards robust detection of adversarial examples. In: Proc. of the 32nd Int’l Conf. on Neural
                     Information Processing Systems. Montréal: Curran Associates Inc., 2018. 4584–4594.
                 [24]  Pang TY, Du C, Zhu J. Max-Mahalanobis linear discriminant analysis networks. In: Proc. of the 35th Int’l Conf. on Machine Learning.
                     Stockholm: PMLR, 2018. 4016–4025.
                 [25]  Metzen JH, Genewein T, Fischer V, Bischoff B. On detecting adversarial perturbations. In: Proc. of the 5th Int’l Conf. on Learning
                     Representations. Toulon: OpenReview.net, 2017.
                 [26]  Feinman R, Curtin RR, Shintre S, Gardner AB. Detecting adversarial samples from artifacts. arXiv:1703.00410, 2017.
                 [27]  Pang TY, Xu K, Du C, Chen N, Zhu J. Improving adversarial robustness via promoting ensemble diversity. In: Proc. of the 36th Int’l
                     Conf. on Machine Learning. Long Beach: PMLR, 2019. 4970–4979.
                 [28]  Van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9(86): 2579–2605.
                 [29]  Liu YP, Chen XY, Liu C, Song D. Delving into transferable adversarial examples and black-box attacks. In: Proc. of the 5th Int’l Conf.
                     on Learning Representations. Toulon: OpenReview.net, 2017.
                 [30]  Xu WL, Evans D, Qi YJ. Feature squeezing: Detecting adversarial examples in deep neural networks. In: Proc. of the 25th Annual
                      Network and Distributed System Security Symp. Washington: The Internet Society, 2018. 15–26.
                 [31]  Harder P, Pfreundt FJ, Keuper M, Keuper J. SpectralDefense: Detecting adversarial attacks on CNNs in the Fourier domain. In: Proc. of
                     the 2021 Int’l Joint Conf. on Neural Networks. Shenzhen: IEEE, 2021. 1–8. [doi: 10.1109/IJCNN52387.2021.9533442]
                 [32]  Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, VanderPlas JT,
                      Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay É. Scikit-learn: Machine learning in Python. The Journal of Machine
                      Learning Research, 2011, 12: 2825–2830.
                 [33]  Papernot N, Faghri F, Carlini N, Goodfellow I, Feinman R, Kurakin A, McDaniel P. Adversarial spheres. arXiv:1801.02774, 2018.
                 [34]  Li YX, Jin W, Xu H, Tang JL. DeepRobust: A platform for adversarial attacks and defenses. In: Proc. of the 35th AAAI Conf. on
                      Artificial Intelligence. Palo Alto: AAAI Press, 2021. 16078–16080. [doi: 10.1609/aaai.v35i18.18017]
                 [35]  Strauss T, Hanselmann M, Junginger A, Ulmer H. Ensemble methods as a defense to adversarial perturbations against deep neural
                      networks. In: Proc. of the 6th Int’l Conf. on Learning Representations. Vancouver: OpenReview.net, 2018.