
                [11]     Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R. Intriguing properties of neural networks. In: Proc.
                     of the Int’l Conf. on Learning Representations. 2014. 1−10.
                [12]     Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 2017,
                     542(7639):115−118.
                [13]     Ma YK, Wu LF, Jian M, Liu FH, Yang Z. Algorithm to generate adversarial examples for face-spoofing detection. Ruan Jian Xue
                     Bao/Journal of Software, 2019,30(2):469−480 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5568.htm [doi:
                     10.13328/j.cnki.jos.005568]
                [14]    Wang WQ, Wang R, Wang LN, Tang BX. Adversarial examples generation approach for tendency classification on Chinese texts. Ruan Jian Xue Bao/Journal of Software, 2019,30(8):2415−2427 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5765.htm [doi: 10.13328/j.cnki.jos.005765]
                [15]     Sharif M, Bhagavatula S, Bauer L, et al. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In:
                     Proc. of the ACM SIGSAC Conf. on Computer and Communications Security. ACM, 2016. 1528−1540.
                [16]     Athalye A, Engstrom L, Ilyas A, et al. Synthesizing robust adversarial examples. In: Proc. of the Int’l Conf. on Machine Learning.
                     2018. 284−293.
                [17]    Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification. In: Proc. of the Conf.
                     on Computer Vision and Pattern Recognition. IEEE, 2018. 1625−1634.
                [18]    Thys S, Van Ranst W, Goedeme T. Fooling automated surveillance cameras: Adversarial patches to attack person detection. In:
                     Proc. of the Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. 1−7.
                [19]    Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. In: Proc. of the Int’l Conf. on Learning Representations. 2015. 1−11.
                [20]    Kurakin A, Goodfellow IJ, Bengio S. Adversarial examples in the physical world. In: Artificial Intelligence Safety and Security. Chapman and Hall/CRC, 2018. 99−112.
                [21]     Papernot N, McDaniel P, Jha S, Fredrikson M, Celik Z, Swami A. The limitations of deep learning in adversarial settings. In: Proc.
                     of the IEEE European Symp. on Security and Privacy. IEEE, 2016. 372−387.
                [22]     Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In: Proc. of the IEEE Symp. on Security and Privacy.
                     IEEE, 2017. 39−57.
                [23]     Papernot N, McDaniel P, Goodfellow I, Jha S, Celik Z, Swami A. Practical black-box attacks against machine learning. In: Proc. of
                     the ACM Asia Conf. on Computer and Communications Security. ACM, 2017. 506−519.
                [24]     Dong Y, Pang T, Su H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks. In: Proc. of the
                     Conf. on Computer Vision and Pattern Recognition. IEEE, 2019. 4312−4321.
                [25]    Zhou W, Hou X, Chen Y, et al. Transferable adversarial perturbations. In: Proc. of the European Conf. on Computer Vision (ECCV). 2018. 452−467.
                [26]     Bhagoji AN, He W, Li B, et al. Practical black-box attacks on deep neural networks using efficient query mechanisms. In: Proc. of
                     the European Conf. on Computer Vision. Cham: Springer-Verlag, 2018. 158−174.
                [27]    Chen PY, Zhang H, Sharma Y, et al. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: Proc. of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 2017. 15−26.
                [28]    Tu CC, Ting P, Chen PY, et al. AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural
                     networks. In: Proc. of the AAAI Conf. on Artificial Intelligence, Vol.33. 2019. 742−749.
                [29]    Ilyas A, Engstrom L, Athalye A, Lin J. Black-box adversarial attacks with limited queries and information. In: Proc. of the 35th Int’l Conf. on Machine Learning. 2018. 2137−2146.
                [30]     Su J, Vargas DV, Sakurai K. One pixel attack for fooling deep neural networks. IEEE Trans. on Evolutionary Computation, 2019,
                     23(5):828−841.
                [31]    Moosavi-Dezfooli SM, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. In: Proc. of the Conf. on Computer Vision and Pattern Recognition. IEEE, 2016. 2574−2582.
                [32]     Moosavi-Dezfooli SM, Fawzi A, Fawzi O, et al. Universal adversarial perturbations. In: Proc. of the Conf. on Computer Vision and
                     Pattern Recognition. 2017. 1765−1773.