[22] Jacob B, Kligys S, Chen B, Zhu M, Tang M, Howard AG, Adam H, Kalenichenko D. Quantization and training of neural networks
for efficient integer-arithmetic-only inference. In: Proc. of the CVPR. Salt Lake City: IEEE Computer Society, 2018. 2704−2713.
[23] Jain SR, Gural A, Wu M, Dick C. Trained uniform quantization for accurate and efficient neural network inference on fixed-point
hardware. arXiv preprint arXiv:1903.08066, 2019.
[24] Bishop CM. Pattern Recognition and Machine Learning. Springer-Verlag, 2006.
[25] Murphy KP. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[26] Zhu C, Han S, Mao H, Dally WJ. Trained ternary quantization. In: Proc. of the ICLR. 2017. https://openreview.net/pdf?id=S1_pAu9xl
[27] Jin C, Sun H, Kimura S. Sparse ternary connect: Convolutional neural networks using ternarized weights with enhanced sparsity. In:
Shin Y, ed. Proc. of the ASP-DAC. IEEE, 2018. 190−195. [doi: 10.1109/ASPDAC.2018.8297304]
[28] Lin DD, Talathi SS, Annapureddy VS. Fixed point quantization of deep convolutional networks. In: Balcan M, Weinberger KQ, eds.
Proc. of the ICML. New York, 2016. 2849−2858.
[29] Polino A, Pascanu R, Alistarh D. Model compression via distillation and quantization. In: Proc. of the ICLR. 2018. https://openreview.net/pdf?id=S1XolQbRW
[30] Wang P, Hu Q, Zhang Y, Zhang C, Liu Y, Cheng J. Two-Step quantization for low-bit neural networks. In: Proc. of the CVPR.
IEEE Computer Society, 2018. 4376−4384. [doi: 10.1109/CVPR.2018.00460]
[31] Gong C, Li T, Lu Y, Hao C, Zhang X, Chen D, Chen Y. μL2Q: An ultra-low loss quantization method for DNN compression. In:
Proc. of the IJCNN. IEEE, 2019. 1−8. [doi: 10.1109/IJCNN.2019.8851699]
[32] Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick RB, Guadarrama S, Darrell T. Caffe: Convolutional architecture for
fast feature embedding. In: Hua KA, ed. Proc. of the 22nd ACM Int’l Conf. on Multimedia. ACM, 2014. 675−678. [doi:
10.1145/2647868.2654889]
[33] Chollet F, et al. Keras. GitHub repository, 2015. https://github.com/keras-team/keras
[34] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-Based learning applied to document recognition. Proc. of the IEEE, 1998,
86(11):2278−2324. [doi: 10.1109/5.726791]
[35] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images. 2009. http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
[36] Deng J, Dong W, Socher R, Li L, Li K, Li F. ImageNet: A large-scale hierarchical image database. In: Proc. of the CVPR. IEEE
Computer Society, 2009. 248−255. [doi: 10.1109/CVPR.2009.5206848]
[37] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Communications of the
ACM, 2017,60(6):84−90. [doi: 10.1145/3065386]
[38] Sandler M, Howard A, Zhu ML, Zhmoginov A, Chen LC. MobileNetV2: Inverted residuals and linear bottlenecks. In: Proc. of the
CVPR. IEEE Computer Society, 2018. 4510−4520. [doi: 10.1109/CVPR.2018.00474]
[39] Ghasemzadeh M, Samragh M, Koushanfar F. ReBNet: Residual binarized neural network. In: Proc. of the FCCM. IEEE Computer
Society, 2018. 57−64. [doi: 10.1109/FCCM.2018.00018]
[40] Courbariaux M, Bengio Y, David JP. BinaryConnect: Training deep neural networks with binary weights during propagations. In:
Proc. of the NIPS. 2015. 3123−3131.
[41] Alemdar H, Leroy V, Prost-Boucle A, Petrot F. Ternary neural networks for resource-efficient AI applications. In: Proc. of the
IJCNN. IEEE, 2017. 2547−2554. [doi: 10.1109/IJCNN.2017.7966166]
[42] Esser SK, Appuswamy R, Merolla P, Arthur JV, Modha DS. Backpropagation for energy-efficient neuromorphic computing. In:
Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R, eds. Proc. of the NIPS. 2015. 1117−1125.
[43] Leng C, Dou Z, Li H, Zhu S, Jin R. Extremely low bit neural network: Squeeze the last bit out with ADMM. In: McIlraith SA,
Weinberger KQ, eds. Proc. of the AAAI. AAAI Press, 2018. 3466−3473.
[44] Lin ZH, Courbariaux M, Memisevic R, Bengio Y. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009,
2015.
[45] Wang W, Lai Q, Fu H, Shen J, Ling H. Salient object detection in the deep learning era: An in-depth survey. arXiv preprint
arXiv:1904.09146, 2019.
[46] Cheng M, Mitra NJ, Huang X, Torr PHS, Hu S. Global contrast based salient region detection. IEEE Trans. on Pattern Analysis and
Machine Intelligence, 2015,37(3):569−582. [doi: 10.1109/TPAMI.2014.2345401]