

         [15]    Yoon J, Kim T, Dia O, Kim S, Bengio Y, Ahn S. Bayesian model-agnostic meta-learning. In: Proc. of the Advances in Neural Information Processing Systems. 2018. 7332−7342.
         [16]    Jaderberg M, Vedaldi A, Zisserman A. Speeding up convolutional neural networks with low rank expansions. In: Proc. of the British Machine Vision Conf. 2014.
         [17]    Peng C, Zhang X, Yu G, Luo G, Sun J. Large kernel matters—Improve semantic segmentation by global convolutional network. In:
             Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 4353−4361.
         [18]    Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Proc. of the Advances in Neural Information Processing Systems. 2012. 1097−1105.
         [19]    Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 1492−1500.
         [20]    Zhang T, Qi GJ, Xiao B, Wang J. Interleaved group convolutions. In: Proc. of the IEEE Int’l Conf. on Computer Vision. 2017.
             4373−4382.
         [21]    Xie G, Wang J, Zhang T, Lai J, Hong R, Qi GJ. Interleaved structured sparse convolutional neural networks. In: Proc. of the IEEE
             Conf. on Computer Vision and Pattern Recognition. 2018. 8847−8856.
         [22]    Mehta S, Rastegari M, Caspi A, Shapiro L, Hajishirzi H. ESPNet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. In: Proc. of the European Conf. on Computer Vision. 2018. 552−568.
         [23]    Mehta S, Rastegari M, Shapiro L, Hajishirzi H. ESPNetv2: A light-weight, power efficient, and general purpose convolutional neural network. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2019. 9190−9200.
         [24]    Li H, Kadav A, Durdanovic I, Samet H, Graf HP. Pruning filters for efficient ConvNets. In: Proc. of the 5th Int’l Conf. on Learning Representations. 2017.
         [25]    Liu Z, Li J, Shen Z, Huang G, Yan S, Zhang C. Learning efficient convolutional networks through network slimming. In: Proc. of
             the IEEE Int’l Conf. on Computer Vision. 2017. 2736−2744.
         [26]    Hu H, Peng R, Tai YW, Tang CK. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. In: Proc. of the 5th Int’l Conf. on Learning Representations. 2016.
         [27]    Tian Q, Arbel T, Clark JJ. Deep LDA-pruned nets for efficient facial gender classification. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops. 2017. 10−19.
         [28]    Molchanov P, Tyree S, Karras T, Aila T, Kautz J. Pruning convolutional neural networks for resource efficient inference. In: Proc. of the 5th Int’l Conf. on Learning Representations. 2017.
         [29]    Luo JH, Wu J, Lin W. ThiNet: A filter level pruning method for deep neural network compression. In: Proc. of the IEEE Int’l Conf. on Computer Vision. 2017. 5058−5066.
         [30]    He Y, Zhang X, Sun J. Channel pruning for accelerating very deep neural networks. In: Proc. of the IEEE Int’l Conf. on Computer
             Vision. 2017. 1389−1397.
         [31]    Wen W, Wu C, Wang Y, Chen Y, Li H. Learning structured sparsity in deep neural networks. In: Proc. of the Advances in Neural
             Information Processing Systems. 2016. 2074−2082.
         [32]    Yu R, Li A, Chen CF, Lai JH, Morariu VI, Han X, Davis LS. NISP: Pruning networks using neuron importance score propagation. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 9194−9203.
         [33]    Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. Deep learning with limited numerical precision. In: Proc. of the Int’l Conf. on Machine Learning. 2015. 1737−1746.
         [34]    Dettmers T. 8-bit approximations for parallelism in deep learning. In: Proc. of the 4th Int’l Conf. on Learning Representations. 2016.
         [35]    Courbariaux M, Bengio Y, David JP. BinaryConnect: Training deep neural networks with binary weights during propagations. In: Proc. of the Advances in Neural Information Processing Systems. 2015. 3123−3131.
         [36]    Hubara I, Courbariaux M, Soudry D, El-Yaniv R, Bengio Y. Binarized neural networks. In: Proc. of the Advances in Neural Information Processing Systems. 2016. 4107−4115.
         [37]    Li F, Zhang B, Liu B. Ternary weight networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops. 2016.
         [38]    Leng C, Dou Z, Li H, Zhu S, Jin R. Extremely low bit neural network: Squeeze the last bit out with ADMM. In: Proc. of the 32nd
             AAAI Conf. on Artificial Intelligence. 2018.
         [39]    Hu Q, Wang P, Cheng J. From hashing to CNNs: Training binary weight networks via hashing. In: Proc. of the 32nd AAAI Conf.
             on Artificial Intelligence. 2018.