
杨岚心 et al.: Quality evaluation of code review comments based on multi-label learning                                                       2793


                     SEKE2017-039]
                 [30]  Ortu M, Adams B, Destefanis G, Tourani P, Marchesi M, Tonelli R. Are bullies more productive? Empirical study of affectiveness vs.
                     issue fixing time. In: Proc. of the 12th IEEE/ACM Working Conf. on Mining Software Repositories. Florence: IEEE, 2015. 303–313.
                     [doi: 10.1109/MSR.2015.35]
                 [31]  Efstathiou V, Spinellis D. Code review comments: Language matters. In: Proc. of the 40th Int’l Conf. on Software Engineering: New
                     Ideas and Emerging Results. Gothenburg: ACM, 2018. 69–72. [doi: 10.1145/3183399.3183411]
                 [32]  Denzin NK. Triangulation 2.0. Journal of Mixed Methods Research, 2012, 6(2): 80–88. [doi: 10.1177/1558689812437186]
                 [33]  IEEE Computer Society. IEEE Std 1028™-2008 IEEE standard for software reviews and audits. New York: IEEE, 2008. 1–53. [doi:
                     10.1109/IEEESTD.2008.4601584]
                 [34]  Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE. A scoping review of rapid
                     review methods. BMC Medicine, 2015, 13(1): 224. [doi: 10.1186/s12916-015-0465-6]
                 [35]  Davila N, Nunes I. A systematic literature review and taxonomy of modern code review. Journal of Systems and Software, 2021, 177:
                     110951. [doi: 10.1016/j.jss.2021.110951]
                 [36]  Wang D, Ueda Y, Kula RG, Ishio T, Matsumoto K. Can we benchmark code review studies? A systematic mapping study of
                     methodology, dataset, and metric. Journal of Systems and Software, 2021, 180: 111009. [doi: 10.1016/j.jss.2021.111009]
                 [37]  Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proc.
                     of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
                     Minneapolis: ACL, 2019. 4171–4186. [doi: 10.18653/v1/n19-1423]
                 [38]  Liu PF, Qiu XP, Huang XJ. Recurrent neural network for text classification with multi-task learning. In: Proc. of the 25th Int’l Joint Conf.
                     on Artificial Intelligence. New York: AAAI, 2016. 2873–2879.
                 [39]  Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. In: Proc. of the
                     31st Int’l Conf. on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 6000–6010.
                 [40]  Gururangan S, Marasović A, Swayamdipta S, Lo K, Beltagy I, Downey D, Smith NA. Don’t stop pretraining: Adapt language models to
                     domains and tasks. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. ACL, 2020. 8342–8360. [doi:
                     10.18653/v1/2020.acl-main.740]
                 [41]  Ridnik T, Ben-Baruch E, Zamir N, Noy A, Friedman I, Protter M, Zelnik-Manor L. Asymmetric loss for multi-label classification. In:
                     Proc. of the 2021 IEEE/CVF Int’l Conf. on Computer Vision. Montreal: IEEE, 2021. 82–91. [doi: 10.1109/ICCV48922.2021.00015]
                 [42]  Miyato T, Maeda SI, Koyama M, Ishii S. Virtual adversarial training: A regularization method for supervised and semi-supervised
                     learning. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979–1993. [doi: 10.1109/TPAMI.2018.2858821]
                 [43]  Zhang WE, Sheng QZ, Alhazmi A, Li CL. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM
                     Trans. on Intelligent Systems and Technology, 2020, 11(3): 24. [doi: 10.1145/3374217]
                 [44]  Kim Y. Convolutional neural networks for sentence classification. In: Proc. of the 2014 Conf. on Empirical Methods in Natural Language
                     Processing. Doha: ACL, 2014. 1746–1751. [doi: 10.3115/v1/d14-1181]
                 [45]  Lai SW, Xu LH, Liu K, Zhao J. Recurrent convolutional neural networks for text classification. In: Proc. of the 29th AAAI Conf. on
                     Artificial Intelligence. Austin: AAAI, 2015. 2267–2273. [doi: 10.1609/aaai.v29i1.9513]
                 [46]  Joulin A, Grave E, Bojanowski P, Mikolov T. Bag of tricks for efficient text classification. In: Proc. of the 15th Conf. of the European
                     Chapter of the Association for Computational Linguistics. Valencia: ACL, 2017. 427–431. [doi: 10.18653/v1/e17-2068]
                 [47]  Johnson R, Zhang T. Deep pyramid convolutional neural networks for text categorization. In: Proc. of the 55th Annual Meeting of the
                     Association for Computational Linguistics. Vancouver: ACL, 2017. 562–570. [doi: 10.18653/v1/P17-1052]
                 [48]  Wu XZ, Zhou ZH. A unified view of multi-label performance measures. In: Proc. of the 34th Int’l Conf. on Machine Learning. Sydney:
                     JMLR.org, 2017. 3780–3788.
                 [49]  Ahmed T, Bosu A, Iqbal A, Rahimi S. SentiCR: A customized sentiment analysis tool for code review interactions. In: Proc. of the 32nd
                     IEEE/ACM Int’l Conf. on Automated Software Engineering. Urbana: IEEE, 2017. 106–111. [doi: 10.1109/ASE.2017.8115623]
                 [50]  Jongeling R, Sarkar P, Datta S, Serebrenik A. On negative results when using sentiment analysis tools for software engineering research.
                     Empirical Software Engineering, 2017, 22(5): 2543–2584. [doi: 10.1007/s10664-016-9493-x]
                 [51]  Pandey R, Purohit H, Castillo C, Shalin VL. Modeling and mitigating human annotation errors to design efficient stream processing
                     systems with human-in-the-loop machine learning. Int’l Journal of Human-computer Studies, 2022, 160: 102772. [doi: 10.1016/j.ijhcs.
                     2022.102772]