2768    Journal of Software 软件学报 Vol.31, No.9, September 2020

          [4]    Rodrigues JD, Rebouças Filho PP, Peixoto Jr E, Kumar A, de Albuquerque VH. Classification of EEG signals to detect alcoholism
             using machine learning techniques. Pattern Recognition Letters, 2019,125:140−149.
          [5]    Zhang Y, Li PS, Wang XH. Intrusion detection for IoT based on improved genetic algorithm and deep belief network. IEEE Access,
             2019,7:31711−31722.
          [6]    Athalye A, Carlini N, Wagner D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial
             examples. In: Proc. of the Int’l Conf. on Machine Learning (ICML). 2018. 274−283.
          [7]    Lu J, Issaranon T, Forsyth D. SafetyNet: Detecting and rejecting adversarial examples robustly. In: Proc. of the 2017 IEEE Int’l
             Conf. on Computer Vision (ICCV). 2017. 446−454.
          [8]    Metzen JH, Genewein T, Fischer V, Bischoff B. On detecting adversarial perturbations. In: Proc. of the Int’l Conf. on Learning
             Representations (ICLR). 2017.
          [9]    Carlini N, Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. In: Proc. of the 10th ACM
             Workshop on Artificial Intelligence and Security. 2017. 3−14.
         [10]    Liao FZ, Liang M, Dong YP, Pang TY, Zhu J, Hu XL. Defense against adversarial attacks using high-level representation guided
             denoiser. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2018. 1778−1787.
         [11]    Pang TY, Xu K, Du C, Chen N, Zhu J. Improving adversarial robustness via promoting ensemble diversity. In: Proc. of Int’l Conf.
             on Machine Learning (ICML). 2019. 4970−4979.
         [12]    Teerapittayanon S, McDanel B, Kung H. BranchyNet: Fast inference via early exiting from deep neural networks. In: Proc. of the
             IEEE Int’l Conf. on Pattern Recognition (ICPR). 2016. 2464−2469.
         [13]    Lin TY, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proc. of the IEEE
             Conf. on Computer Vision and Pattern Recognition (CVPR). 2017. 936−944.
         [14]    He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In: Proc. of the IEEE Conf. on Computer Vision
             and Pattern Recognition (CVPR). 2016. 770−778.
         [15]    Ranjan R, Sankaranarayanan S, Castillo CD, Chellappa R. Improving network robustness against adversarial attacks with compact
             convolution. arXiv preprint arXiv:1712.00699, 2017.
         [16]    Miyato T, Maeda SI, Koyama M, Ishii S. Virtual adversarial training: A regularization method for supervised and semi-supervised
             learning. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2018,41(8):1979−1993.
         [17]    Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale. In: Proc. of the Int’l Conf. on Learning Representations
             (ICLR). 2017.
         [18]    Kurakin A, Goodfellow I, Bengio S, Dong YP, Liao FZ, Liang M, Pang TY, Zhu J, Hu XL, Xie CH, et al. Adversarial attacks and
             defences competition. In: Proc. of the NIPS 2017 Competition: Building Intelligent Systems. Cham: Springer-Verlag, 2018.
             195−231.
         [19]    Samangouei P, Kabkab M, Chellappa R. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In:
             Proc. of the Int’l Conf. on Learning Representations (ICLR). 2018.
         [20]    Guo C, Rana M, Cisse M, Van Der Maaten L. Countering adversarial images using input transformations. In: Proc. of the Int’l Conf.
             on Learning Representations (ICLR). 2018.
         [21]    Lamb A, Binas J, Goyal A, Serdyuk D, Subramanian S, Mitliagkas I, Bengio Y. Fortified networks: Improving the robustness of
             deep networks by modeling the manifold of hidden representations. arXiv preprint arXiv:1804.02485, 2018.
         [22]    Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. In: Proc. of the Int’l Conf. on Learning
             Representations (ICLR). 2015.
         [23]    Kurakin A, Goodfellow IJ, Bengio S. Adversarial examples in the physical world. In: Proc. of the Int’l Conf. on Learning
             Representations (ICLR) Workshop. 2017.
         [24]    Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. In: Proc. of
             the Int’l Conf. on Learning Representations (ICLR). 2018.
         [25]    Dong YP, Liao FZ, Pang TY, Su H, Hu XL, Li JG, Zhu J. Boosting adversarial attacks with momentum. In: Proc. of the IEEE Conf.
             on Computer Vision and Pattern Recognition (CVPR). 2018. 9185−9193.