
[15] Karatzoglou A, Amatriain X, Baltrunas L, Oliver N. Multiverse recommendation: N-dimensional tensor factorization for context-aware collaborative filtering. In: Proc. of the 4th ACM Conf. on Recommender Systems. ACM, 2010. 79−86.
[16] Shi Y, Karatzoglou A, Baltrunas L, Larson M, Hanjalic A, Oliver N. TFMAP: Optimizing MAP for top-N context-aware recommendation. In: Proc. of the 35th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. 2012. 155−164.
[17] LeCun Y, Denker JS, Solla S. Optimal brain damage. In: Proc. of the Conf. on Neural Information Processing Systems (NIPS). 1990. 598−605.
[18] Hassibi B, Stork DG. Second order derivatives for network pruning: Optimal brain surgeon. In: Proc. of the 1993 MIT Press Conf. on Neural Information Processing Systems (NIPS). 1993. 164−171.
[19] Srinivas S, Babu RV. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149, 2015.
[20] Han S, Mao H, Dally WJ. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[21] Han S, Pool J, Tran J, Dally W. Learning both weights and connections for efficient neural network. In: Proc. of the 2015 MIT Press Conf. on Neural Information Processing Systems (NIPS). Cambridge: MIT Press, 2015. 1135−1143.
[22] Lebedev V, Lempitsky V. Fast ConvNets using group-wise brain damage. In: Proc. of the 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2016). Piscataway: IEEE, 2016. 2554−2564.
[23] Molchanov P, Tyree S, Karras T, Aila T, Kautz J. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2017.
[24] Gong Y, Liu L, Yang M, Bourdev L. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[25] Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. Deep learning with limited numerical precision. In: Proc. of the 32nd Int’l Conf. on Machine Learning (ICML). 2015. 1737−1746.
[26] Li F, Zhang B, Liu B. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
[27] Courbariaux M, Bengio Y, David J. BinaryConnect: Training deep neural networks with binary weights during propagations. In: Proc. of the 2015 MIT Press Conf. on Neural Information Processing Systems (NIPS). 2015. 3123−3131.
[28] Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proc. of the 2017 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2017. 1800−1807.
[29] Zhang X, Zhou X, Lin M, Sun J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In: Proc. of the 2018 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2018. 6848−6856.
[30] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[31] Rigamonti R, Sironi A, Lepetit V, Fua P. Learning separable filters. In: Proc. of the 2013 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2013. 2754−2761.
[32] Denton E, Zaremba W, Bruna J, LeCun Y, Fergus R. Exploiting linear structure within convolutional networks for efficient evaluation. In: Proc. of the 2014 MIT Press Conf. on Neural Information Processing Systems (NIPS). 2014. 1269−1277.
[33] Jaderberg M, Vedaldi A, Zisserman A. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[34] Lebedev V, Ganin Y, Rakhuba M, Oseledets I, Lempitsky V. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.
[35] Kim YD, Park E, Yoo S, Choi T, Yang L, Shin DJ. Compression of deep convolutional neural networks for fast and low power mobile applications. Computer Science, 2015, 71(2): 576−584.
[36] Girshick R. Fast R-CNN. In: Proc. of the IEEE Int’l Conf. on Computer Vision (ICCV). 2015. 1440−1448.
[37] Tai C, Xiao T, Zhang Y, Wang XG, E WN. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2016.
[38] Wen W, Xu C, Wu C, Wang YD, Chen YR, Li H. Coordinating filters for faster deep neural networks. In: Proc. of the 2017 IEEE Int’l Conf. on Computer Vision (ICCV). 2017. 658−666.
[39] Yao Q, Wang MS, Chen YQ, Dai WY, Li YF, Tu WW, Yang Q, Yu Y. Taking human out of learning applications: A survey on automated machine learning. arXiv preprint arXiv:1810.13306, 2018.