                 [16]  Allamanis M, Brockschmidt M, Khademi M. Learning to represent programs with graphs. In: Proc. of the 6th Int’l Conf. on Learning
                     Representations. Vancouver: OpenReview.net, 2018.
                 [17]  Jurafsky D, Martin JH. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics,
                     and Speech Recognition. 2nd ed., Upper Saddle River: Pearson Prentice Hall, 2009.
                 [18]  Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. In: Proc. of the 5th Int’l Conf. on Learning
                     Representations. Toulon: OpenReview.net, 2017.
                 [19]  Schlichtkrull M, Kipf TN, Bloem P, van den Berg R, Titov I, Welling M. Modeling relational data with graph convolutional networks. In:
                     Proc. of the 15th European Semantic Web Conf. Heraklion: Springer, 2018. 593–607. [doi: 10.1007/978-3-319-93417-4_38]
                 [20]  Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. In: Proc. of the
                     31st Int’l Conf. on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 6000–6010.
                 [21]  Socher R, Chen DQ, Manning CD, Ng AY. Reasoning with neural tensor networks for knowledge base completion. In: Proc. of the 26th
                     Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: Curran Associates Inc., 2013. 926–934.
                 [22]  Li HY, Kim S, Chandra S. Neural code search evaluation dataset. arXiv:1908.09804, 2019.
                 [23]  Husain H, Wu HH, Gazit T, Allamanis M, Brockschmidt M. CodeSearchNet challenge: Evaluating the state of semantic code search.
                     arXiv:1909.09436, 2019.
                 [24]  Ling X, Wu LF, Wang SZ, Pan GN, Ma TF, Xu FL, Liu AX, Wu CM, Ji SL. Deep graph matching and searching for semantic code
                     retrieval. ACM Trans. on Knowledge Discovery from Data, 2021, 15(5): 88. [doi: 10.1145/3447571]
                 [25]  Manning C, Surdeanu M, Bauer J, Finkel J, Bethard S, McClosky D. The Stanford CoreNLP natural language processing toolkit. In: Proc.
                     of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Baltimore: ACL, 2014. 55–60.
                     [doi: 10.3115/v1/P14-5010]
                 [26]  Fernandes P, Allamanis M, Brockschmidt M. Structured neural summarization. In: Proc. of the 7th Int’l Conf. on Learning
                     Representations. New Orleans: OpenReview.net, 2019.
                 [27]  Cvitkovic M, Singh B, Anandkumar A. Open vocabulary learning on source code with a graph-structured cache. In: Proc. of the 36th Int’l
                     Conf. on Machine Learning. Long Beach: PMLR, 2019. 1475–1485.
                 [28]  Bromley J, Bentz JW, Bottou L, Guyon I, LeCun Y, Moore C, Säckinger E, Shah R. Signature verification using a “Siamese” time delay
                     neural network. Int’l Journal of Pattern Recognition and Artificial Intelligence, 1993, 7(4): 669–688. [doi: 10.1142/S0218001493000339]
                 [29]  Pennington J, Socher R, Manning C. GloVe: Global vectors for word representation. In: Proc. of the 2014 Conf. on Empirical Methods in
                     Natural Language Processing. Doha: ACL, 2014. 1532–1543. [doi: 10.3115/v1/D14-1162]
                 [30]  Kingma DP, Ba LJ. Adam: A method for stochastic optimization. In: Proc. of the 3rd Int’l Conf. on Learning Representations. San Diego:
                     ICLR, 2015.
                 [31]  Cambronero J, Li HY, Kim S, Sen K, Chandra S. When deep learning met code search. In: Proc. of the 27th ACM Joint Meeting on
                     European Software Engineering Conf. and Symp. on the Foundations of Software Engineering. Tallinn: ACM, 2019. 964–974. [doi: 10.
                     1145/3338906.3340458]
                 [32]  Sun YF, Cheng CM, Zhang YH, Zhang C, Zheng L, Wang ZD, Wei YC. Circle loss: A unified perspective of pair similarity optimization.
                     In: Proc. of the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 6397–6406. [doi: 10.1109/
                     CVPR42600.2020.00643]
                 [33]  Li J, Li YM, Li G, Hu X, Xia X, Jin Z. EditSum: A retrieve-and-edit framework for source code summarization. In: Proc. of the 36th
                     IEEE/ACM Int’l Conf. on Automated Software Engineering. Melbourne: IEEE, 2021. 155–166. [doi: 10.1109/ASE51524.2021.9678724]
                 [34]  Liu SQ, Chen Y, Xie XF, Siow JK, Liu Y. Retrieval-augmented generation for code summarization via hybrid GNN. In: Proc. of the 9th
                     Int’l Conf. on Learning Representations. OpenReview.net, 2021.
                 [35]  Zhang J, Wang X, Zhang HY, Sun HL, Liu XD. Retrieval-based neural source code summarization. In: Proc. of the 42nd IEEE/ACM Int’l
                     Conf. on Software Engineering. Seoul: IEEE, 2020. 1385–1397. [doi: 10.1145/3377811.3380383]
                 [36]  Ahmad WU, Chakraborty S, Ray B, Chang KW. A transformer-based approach for source code summarization. In: Proc. of the 58th
                     Annual Meeting of the Association for Computational Linguistics. ACL, 2020. 4998–5007. [doi: 10.18653/v1/2020.acl-main.449]
                 [37]  Brockschmidt M, Allamanis M, Gaunt AL, Polozov O. Generative code modeling with graphs. In: Proc. of the 7th Int’l Conf. on Learning
                     Representations. New Orleans: OpenReview.net, 2019.
                 [38]  Zhong RQ, Stern M, Klein D. Semantic scaffolds for pseudocode-to-code generation. In: Proc. of the 58th Annual Meeting of the
                     Association for Computational Linguistics. ACL, 2020. 2283–2295. [doi: 10.18653/v1/2020.acl-main.208]
                 [39]  Sun ZY, Zhu QH, Xiong YF, Sun YC, Mou LL, Zhang L. TreeGen: A tree-based Transformer architecture for code generation. In: Proc.
                     of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI Press, 2020. 8984–8991.