Page 153, Journal of Software (《软件学报》), 2021, No. 11
Song BB et al.: Accelerating convolutional neural networks with automated tensor decomposition, p. 3479
The experimental results achieve up to a 37× reduction in the floating-point operations (FLOPs) of the convolutional layers; this FLOP reduction in turn reflects, to some extent, reductions in CNN running time and computational energy consumption.
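The scale of such a FLOP reduction can be sketched with a back-of-the-envelope count, using one common CP scheme that replaces a convolution with a chain of four cheap convolutions. This is an illustrative estimate only; the layer sizes and rank below are assumptions, not the paper's configurations:

```python
def conv_flops(in_ch, out_ch, k, out_h, out_w):
    # Standard convolution: each output value needs k*k*in_ch multiply-adds.
    return 2 * in_ch * out_ch * k * k * out_h * out_w

def cp_conv_flops(in_ch, out_ch, k, rank, out_h, out_w):
    # CP-decomposed convolution as a chain of four cheap convolutions:
    # 1x1 (in_ch -> rank), k x 1 (rank -> rank), 1 x k (rank -> rank),
    # and 1x1 (rank -> out_ch).
    return 2 * out_h * out_w * (in_ch * rank + 2 * k * rank + rank * out_ch)

# Illustrative layer: 128 -> 128 channels, 3x3 kernel, 32x32 output, rank 16.
speedup = conv_flops(128, 128, 3, 32, 32) / cp_conv_flops(128, 128, 3, 16, 32, 32)
print(f"estimated FLOP reduction: {speedup:.1f}x")
```

With a sufficiently small rank relative to the channel counts, reductions of the order reported above fall out directly from the ratio of the two counts.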
Analysis of the experimental results shows that the two AutoACNN algorithms proposed in this paper achieve better acceleration and parameter compression, fewer convolutional-layer floating-point operations, and smaller accuracy loss.
5 Conclusion
This paper uses tensor decomposition to accelerate convolutional neural networks: it analyzes how CP decomposition and Tucker decomposition accelerate convolutional layers, and proposes automated tensor decomposition for accelerating CNNs. Experiments on the MNIST and CIFAR-10 datasets evaluate the two algorithms designed in this paper, automated CNN acceleration based on parameter estimation and automated CNN acceleration based on a genetic algorithm. Given a tolerated accuracy loss, both algorithms automatically find the network model with the best acceleration performance, removing the tedious engineering effort of manual rank selection and the risk that manual selection fails to find the optimal scheme.
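A rank search of the genetic-algorithm flavor described above can be sketched as follows. This is a toy illustration, not the paper's implementation; the fitness function, the accuracy model, and the per-layer rank encoding are all assumptions:

```python
import random

def ga_rank_search(n_layers, max_rank, flops_fn, acc_fn, acc_tol, base_acc,
                   pop=20, gens=30, mut_p=0.2, seed=0):
    """Search per-layer decomposition ranks that minimize FLOPs subject to
    an accuracy-drop tolerance, via a simple elitist genetic algorithm."""
    rng = random.Random(seed)

    def fitness(ranks):
        # Infeasible if the accuracy drop exceeds the tolerance;
        # otherwise, fewer FLOPs is better.
        if base_acc - acc_fn(ranks) > acc_tol:
            return float("-inf")
        return -flops_fn(ranks)

    popn = [[rng.randint(1, max_rank) for _ in range(n_layers)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]                      # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_layers)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_p:                  # point mutation
                child[rng.randrange(n_layers)] = rng.randint(1, max_rank)
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)

# Toy stand-ins: FLOPs grow with rank; accuracy suffers when ranks drop below 5.
flops_fn = sum
acc_fn = lambda r: 0.99 - 0.02 * sum(1 for x in r if x < 5) / len(r)
best = ga_rank_search(n_layers=3, max_rank=8, flops_fn=flops_fn,
                      acc_fn=acc_fn, acc_tol=0.01, base_acc=0.99)
print("best ranks:", best)
```

In practice the accuracy term would come from fine-tuning and evaluating the decomposed network, which makes the fitness evaluation the dominant cost of the search.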
The experiments show that automated tensor decomposition performs well in accelerating and compressing convolutional neural networks, providing a reliable solution for automated CNN acceleration and compression.