         [40]    Wang P, Hu Q, Zhang Y, Zhang C, Liu Y, Cheng J. Two-Step quantization for low-bit neural networks. In: Proc. of the IEEE Conf.
             on Computer Vision and Pattern Recognition. 2018. 4376−4384.
         [41]    Tung F, Mori G. CLIP-Q: Deep network compression learning by in-parallel pruning-quantization. In: Proc. of the IEEE Conf. on
             Computer Vision and Pattern Recognition. 2018. 7873−7882.
         [42]    Denton EL, Zaremba W, Bruna J, LeCun Y, Fergus R. Exploiting linear structure within convolutional networks for efficient
             evaluation. In: Proc. of the Advances in Neural Information Processing Systems. 2014. 1269−1277.
         [43]    Zhang X, Zou J, He K, Sun J. Accelerating very deep convolutional networks for classification and detection. IEEE Trans. on
             Pattern Analysis and Machine Intelligence, 2015,38(10):1943−1955.
         [44]    Lebedev V, Ganin Y, Rakhuba M, Oseledets I, Lempitsky V. Speeding-Up convolutional neural networks using fine-tuned
             CP-decomposition. In: Proc. of the 3rd Int’l Conf. on Learning Representations. 2015.
         [45]    Kim YD, Park E, Yoo S, Choi T, Yang L, Shin D. Compression of deep convolutional neural networks for fast and low power
             mobile applications. In: Proc. of the 4th Int’l Conf. on Learning Representations. 2016.
         [46]    Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. In: Proc. of the Advances in Neural Information
             Processing Systems Workshop, Vol.27. 2014.
         [47]    Romero A, Ballas N, Kahou SE, Chassang A, Gatta C, Bengio Y. Fitnets: Hints for thin deep nets. In: Proc. of the 3rd Int’l Conf.
             on Learning Representations. 2015.
         [48]    Zagoruyko S, Komodakis N. Paying more attention to attention: Improving the performance of convolutional neural networks via
             attention transfer. In: Proc. of the 5th Int’l Conf. on Learning Representations. 2017.
         [49]    Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Wierstra D. Continuous control with deep reinforcement learning. In:
             Proc. of the 4th Int’l Conf. on Learning Representations. 2016.
         [50]    Ashok A, Rhinehart N, Beainy F, Kitani KM. N2N learning: Network to network compression via policy gradient reinforcement
             learning. In: Proc. of the 6th Int’l Conf. on Learning Representations. 2018.
         [51]    Wong C, Houlsby N, Lu Y, Gesmundo A. Transfer learning with neural AutoML. In: Proc. of the Advances in Neural Information
             Processing Systems, Vol.31. 2018. 8366−8375.
         [52]    Lin J, Rao Y, Lu J, Zhou J. Runtime neural pruning. In: Proc. of the Advances in Neural Information Processing Systems. 2017.
             2181−2191.
         [53]    Wang H, Zhang Q, Wang Y, Hu H. Structured probabilistic pruning for convolutional neural network acceleration. In: Proc. of the
             British Machine Vision Conf. 2018.
         [54]    Real E, Aggarwal A, Huang Y, Le QV. Regularized evolution for image classifier architecture search. In: Proc. of the AAAI Conf.
             on Artificial Intelligence. 2019.
         [55]    Chen LC, Collins M, Zhu Y, Papandreou G, Zoph B, Schroff F, Adam H, Shlens J. Searching for efficient multi-scale architectures
             for dense image prediction. In: Proc. of the Advances in Neural Information Processing Systems. 2018. 8699−8710.
         [56]    Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proc. of the IEEE Conf. on Computer Vision and
             Pattern Recognition. 2017. 1251−1258.
         [57]    Yu F, Koltun V. Multi-Scale context aggregation by dilated convolutions. In: Proc. of the 4th Int’l Conf. on Learning
             Representations. 2016.
         [58]    Baker B, Gupta O, Naik N, Raskar R. Designing neural network architectures using reinforcement learning. In: Proc. of the 5th Int’l
             Conf. on Learning Representations. 2017.
         [59]    Suganuma M, Shirakawa S, Nagao T. A genetic programming approach to designing convolutional neural network architectures. In:
             Proc. of the Genetic and Evolutionary Computation Conf. 2017. 497−504.
         [60]    Cai H, Chen T, Zhang W, Yu Y, Wang J. Efficient architecture search by network transformation. In: Proc. of the 32nd AAAI Conf.
             on Artificial Intelligence. 2018.
         [61]    Mendoza H, Klein A, Feurer M, Springenberg J, Hutter F. Towards automatically-tuned neural networks. In: Proc. of the Workshop
             on Automatic Machine Learning. 2016. 58−65.
         [62]    Zoph B, Le QV. Neural architecture search with reinforcement learning. In: Proc. of the 5th Int’l Conf. on Learning
             Representations. 2017.
         [63]    Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with
             convolutions. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2015. 1−9.
         [64]    Liu C, Zoph B, Neumann M, Shlens J, Hua W, Li LJ, Fei-Fei L, Yuille AL, Huang J, Murphy K. Progressive neural architecture
             search. In: Proc. of the European Conf. on Computer Vision. 2018. 19−34.