Page 189 - 《软件学报》2025年第10期
P. 189

4586                                                      软件学报  2025  年第  36  卷第  10  期


                     defect prediction models. IEEE Trans. on Software Engineering, 2020, 46(11): 1200–1219. [doi: 10.1109/TSE.2018.2876537]
                 [17]   Catolino G, Di Nucci D, Ferrucci F. Cross-project just-in-time bug prediction for mobile APPs: An empirical assessment. In: Proc. of the
                     6th IEEE/ACM Int’l Conf. on Mobile Software Engineering and Systems (MOBILESoft). Montreal: IEEE, 2019: 99–110. [doi: 10.1109/
                     MOBILESoft.2019.00023]
                 [18]   Bird C, Bachmann A, Aune E, Duffy J, Bernstrin A, Filkov V, Devanbu P. Fair and balanced? Bias in bug-fix datasets. In: Proc. of the
                     7th  Joint  Meeting  of  the  European  Software  Engineering  Conf.  and  the  ACM  SIGSOFT  Symp.  on  the  Foundations  of  Software
                     Engineering. Amsterdam: ACM, 2009. 121–130. [doi: 10.1145/1595696.1595716]
                 [19]   Bachmann A, Bird C, Rahman F, Devanbu P, Bernstein A. The missing links: Bugs and bug-fix commits. In: Proc. of the 18th ACM
                     SIGSOFT Int’l Symp. on Foundations of Software Engineering. Santa Fe: ACM, 2010. 97–106. [doi: 10.1145/1882291.1882308]
                 [20]   Kim S, Zhang HY, Wu RX, Gong L. Dealing with noise in defect prediction. In: Proc. of the 33rd Int’l Conf. on Software Engineering.
                     Honolulu: ACM, 2011. 481–490. [doi: 10.1145/1985793.1985859]
                 [21]   Antoniol G, Ayari K, Di Penta M, Khomh F, Guéhéneuc YG. Is it a bug or an enhancement? A text-based approach to classify change
                     requests. In: Proc. of the 2008 Conf. of the Center for Advanced Studies on Collaborative Research: Meeting of Minds. Ontario: ACM,
                     2008. 304–318. [doi: 10.1145/1463788.1463819]
                 [22]   Kochhar PS, Thung F, Lo D. Automatic fine-grained issue report reclassification. In: Proc. of the 19th Int’l Conf. on Engineering of
                     Complex Computer Systems. Tianjin: IEEE, 2014. 126–135. [doi: 10.1109/ICECCS.2014.25]
                 [23]   Herzig K, Just S, Zeller A. It’s not a bug, it’s a feature: How misclassification impacts bug prediction. In: Proc. of the 35th Int’l Conf. on
                     Software Engineering (ICSE). San Francisco: IEEE, 2013. 392–401. [doi: 10.1109/ICSE.2013.6606585]
                 [24]   Tantithamthavorn C, McIntosh S, Hassan AE, Ihara A, Mastsumoto K. The impact of mislabelling on the performance and interpretation
                     of defect prediction models. In: Proc. of the 37th IEEE/ACM IEEE Int’l Conf. on Software Engineering. Florence: IEEE, 2015. 812–823.
                     [doi: 10.1109/ICSE.2015.93]
                 [25]   Herzig K, Just S, Zeller A. The impact of tangled code changes on defect prediction models. Empirical Software Engineering, 2016,
                     21(2): 303–336. [doi: 10.1007/s10664-015-9376-6]
                 [26]   Zimmermann T, Kim S, Zeller A, Whitehead EJ. Mining version archives for co-changed lines. In: Proc. of the 2006 Int’l Workshop on
                     Mining Software Repositories. Shanghai: ACM, 2006. 72–75. [doi: 10.1145/1137983.1138001]
                 [27]   Shivaji S, Whitehead EJ, Akella R, Kim S. Reducing features to improve code change-based bug prediction. IEEE Trans. on Software
                     Engineering, 2013, 39(4): 552–569. [doi: 10.1109/TSE.2012.43]
                 [28]   Koru AG, Zhang DS, El Emam K, Liu HF. An investigation into the functional form of the size-defect relationship for software modules.
                     IEEE Trans. on Software Engineering, 2009, 35(2): 293–304. [doi: 10.1109/TSE.2008.90]
                 [29]   Mockus A, Weiss DM. Predicting risk of software changes. Bell Labs Technical Journal, 2002, 5(2): 169–180. [doi: 10.1002/bltj.2229]
                 [30]   Kim S, Whitehead EJ, Zhang Y. Classifying software changes: Clean or buggy? IEEE Trans. on Software Engineering, 2008, 34(2):
                     181–196. [doi: 10.1109/TSE.2007.70773]
                 [31]   Shihab E, Hassan AE, Adams B, Jiang ZM. An industrial study on the risk of software changes. In: Proc. of the 20th ACM SIGSOFT Int’l
                     Symp. on the Foundations of Software Engineering. Cary: ACM, 2012. 62. [doi: 10.1145/2393596.2393670]
                 [32]   Arisholm E, Briand LC, Fuglerud M. Data mining techniques for building fault-proneness models in telecom Java software. In: Proc. of
                     the 18th IEEE Int’l Symp. on Software Reliability (ISSRE 2007). Trollhattan: IEEE, 2007. 215–224. [doi: 10.1109/ISSRE.2007.22]
                 [33]   Mende  T,  Koschke  R.  Revisiting  the  evaluation  of  defect  prediction  models.  In:  Proc.  of  the  5th  Int’l  Conf.  on  Predictor  Models  in
                     Software Engineering. Vancouver: ACM, 2009. 7. [doi: 10.1145/1540438.1540448]
                 [34]   Yang  YB,  Zhou  YM,  Liu  JP,  Zhao  YY,  Lu  HM,  Xu  L,  Xu  BW,  Leung  H.  Effort-aware  just-in-time  defect  prediction:  Simple
                     unsupervised  models  could  be  better  than  supervised  models.  In:  Proc.  of  the  24th  ACM  SIGSOFT  Int’l  Symp.  on  Foundations  of
                     Software Engineering. Seattle: ACM, 2016. 157–168. [doi: 10.1145/2950290.2950353]
                 [35]   Liu JP, Zhou YM, Yang YB, Lu HM, Xu BW. Code churn: A neglected metric in effort-aware just-in-time defect prediction. In: Proc. of
                     the 2017 ACM/IEEE Int’l Symp. on Empirical Software Engineering and Measurement (ESEM). Toronto: IEEE, 2017. 11–19. [doi: 10.
                     1109/ESEM.2017.8]
                 [36]   Fu W, Menzies T. Revisiting unsupervised learning for defect prediction. In: Proc. of the 11th Joint Meeting on Foundations of Software
                     Engineering. Paderborn: ACM, 2017. 72–83. [doi: 10.1145/3106237.3106257]
                 [37]   Huang  Q,  Xia  X,  Lo  D.  Revisiting  supervised  and  unsupervised  models  for  effort-aware  just-in-time  defect  prediction.  Empirical
                     Software Engineering, 2019, 24(5): 2823–2862. [doi: 10.1007/s10664-018-9961-2]
                 [38]   Chen X, Zhao YQ, Wang QP, Yuan ZD. MULTI: Multi-objective effort-aware just-in-time software defect prediction. Information and
                     Software Technology, 2018, 93: 1–13. [doi: 10.1016/j.infsof.2017.08.004]
                 [39]   Li  WW,  Zhang  WZ,  Jia  XY,  Huang  ZQ.  Effort-aware  semi-supervised  just-in-time  defect  prediction.  Information  and  Software
   184   185   186   187   188   189   190   191   192   193   194