
软件学报 (Journal of Software), 2024, 35(6)



Appendix A

Tables A1–A3 list the image, audio, and text data augmentation strategies adopted in this paper, respectively.

Table A1  The 13 image data augmentation functions and their magnitude ranges

Augmentation function | Description                                                                          | Magnitude range
ShearX (Y)            | Shear the image along the X (Y) axis by the given magnitude (sign negated with probability 0.5)     | [–0.3, 0.3]
TranslateX (Y)        | Translate the image along the X (Y) axis by the given magnitude (sign negated with probability 0.5) | [–150, 150]
Rotate                | Rotate the image by the given magnitude (sign negated with probability 0.5)                         | [–30, 30]
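To illustrate the convention shared by the entries in Table A1 (the sampled magnitude is negated with probability 0.5 before being applied), the first two functions can be sketched in pure Python on a 2-D pixel grid. The nested-list image representation and the `fill` value are assumptions made for this sketch, not the paper's implementation:

```python
import random

def shear_x(img, magnitude, fill=0):
    # Shear a 2-D pixel grid along the X axis: x' = x + magnitude * y.
    # The magnitude's sign is flipped with probability 0.5, as in Table A1.
    if random.random() < 0.5:
        magnitude = -magnitude
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            src = x + int(round(magnitude * y))  # inverse affine lookup
            if 0 <= src < w:
                out[y][x] = img[y][src]
    return out

def translate_x(img, pixels, fill=0):
    # Translate the grid along the X axis by `pixels`,
    # with the sign flipped with probability 0.5.
    if random.random() < 0.5:
        pixels = -pixels
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            src = x - pixels
            if 0 <= src < w:
                out[y][x] = img[y][src]
    return out

random.seed(0)  # seed for a reproducible demonstration
img = [[1, 2, 3], [4, 5, 6]]
print(translate_x(img, 1))  # → [[0, 1, 2], [0, 4, 5]] (no sign flip under this seed)
```

In practice these operations would be applied through an image library rather than explicit loops, but the sketch makes the sign-flip convention and the magnitude's role explicit.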