[12] Zhang TC, Tian X, Sun XH, Yu MH, Sun YH, Yu G. Overview on knowledge graph embedding technology research. Ruan Jian Xue Bao/Journal of Software, 2023, 34(1): 277–311 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6429.htm [doi: 10.13328/j.cnki.jos.006429]
[13] Wang Z, Zhang JW, Feng JL, Chen Z. Knowledge graph embedding by translating on hyperplanes. In: Proc. of the 28th AAAI Conf. on Artificial Intelligence. Québec: AAAI Press, 2014. 1112–1119.
[14] Trouillon T, Welbl J, Riedel S, Gaussier É, Bouchard G. Complex embeddings for simple link prediction. In: Proc. of the 33rd Int’l Conf. on Machine Learning. New York: JMLR.org, 2016. 2071–2080.
[15] Nickel M, Tresp V, Kriegel HP. A three-way model for collective learning on multi-relational data. In: Proc. of the 28th Int’l Conf. on Machine Learning. Bellevue: Omnipress, 2011. 809–816.
[16] Yang BS, Yih WT, He XD, Gao JF, Deng L. Embedding entities and relations for learning and inference in knowledge bases. arXiv:1412.6575, 2014.
[17] Schlichtkrull M, Kipf TN, Bloem P, van den Berg R, Titov I, Welling M. Modeling relational data with graph convolutional networks. In: Proc. of the 15th Int’l Conf. on the Semantic Web. Heraklion: Springer, 2018. 593–607. [doi: 10.1007/978-3-319-93417-4_38]
[18] Vashishth S, Sanyal S, Nitin V, Talukdar P. Composition-based multi-relational graph convolutional networks. arXiv:1911.03082, 2020.
[19] Xie RB, Liu ZY, Jia J, Luan HB, Sun MS. Representation learning of knowledge graphs with entity descriptions. In: Proc. of the 30th AAAI Conf. on Artificial Intelligence. Phoenix: AAAI Press, 2016. 2659–2665. [doi: 10.1609/aaai.v30i1.10329]
[20] Yao L, Mao CS, Luo Y. KG-BERT: BERT for knowledge graph completion. arXiv:1909.03193, 2019.
[21] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis: ACL, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
[22] Teru KK, Denis EG, Hamilton WL. Inductive relation prediction by subgraph reasoning. In: Proc. of the 37th Int’l Conf. on Machine Learning. Vienna: PMLR, 2020. 9448–9457.
[23] Zha HW, Chen ZY, Yan XF. Inductive relation prediction by BERT. In: Proc. of the 36th AAAI Conf. on Artificial Intelligence. Palo Alto: AAAI Press, 2022. 5923–5931. [doi: 10.1609/aaai.v36i5.20537]
[24] Geng YX, Chen JY, Pan JZ, Chen MY, Jiang S, Zhang W. Relational message passing for fully inductive knowledge graph completion. In: Proc. of the 39th IEEE Int’l Conf. on Data Engineering. Anaheim: IEEE, 2023. 1221–1233. [doi: 10.1109/ICDE55515.2023.00098]
[25] He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In: Proc. of the 2016 IEEE Conf. on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016. 770–778. [doi: 10.1109/CVPR.2016.90]
[26] Ba JL, Kiros JR, Hinton GE. Layer normalization. arXiv:1607.06450, 2016.
[27] Lovász L. Random walks on graphs: A survey. 1993. https://cs.yale.edu/publications/techreports/tr1029.pdf
[28] Su JL, Ahmed M, Lu Y, Pan SF, Bo W, Liu YF. RoFormer: Enhanced Transformer with rotary position embedding. Neurocomputing, 2024, 568: 127063. [doi: 10.1016/j.neucom.2023.127063]
[29] Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. In: Proc. of the 37th Int’l Conf. on Machine Learning. Vienna: PMLR, 2020. 1597–1607.
[30] Bollacker K, Evans C, Paritosh P, Sturge T, Taylor J. Freebase: A collaboratively created graph database for structuring human knowledge. In: Proc. of the 2008 ACM SIGMOD Int’l Conf. on Management of Data. Vancouver: ACM, 2008. 1247–1250. [doi: 10.1145/1376616.1376746]
[31] Toutanova K, Chen DQ, Pantel P, Poon H, Choudhury P, Gamon M. Representing text for joint embedding of text and knowledge bases. In: Proc. of the 2015 Conf. on Empirical Methods in Natural Language Processing. Lisbon: ACL, 2015. 1499–1509. [doi: 10.18653/v1/D15-1174]
[32] Pennington J, Socher R, Manning C. GloVe: Global vectors for word representation. In: Proc. of the 2014 Conf. on Empirical Methods in Natural Language Processing. Doha: ACL, 2014. 1532–1543. [doi: 10.3115/v1/D14-1162]
[33] Balažević I, Allen C, Hospedales T. TuckER: Tensor factorization for knowledge graph completion. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing. Hong Kong: ACL, 2019. 5185–5194. [doi: 10.18653/v1/D19-1522]
[34] Zhu YQ, Wang XH, Chen J, Qiao SF, Ou YX, Yao YZ, Deng SM, Chen HJ, Zhang NY. LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities. arXiv:2305.13168, 2023.
[35] Bhargava P, Drozd A, Rogers A. Generalization in NLI: Ways (not) to go beyond simple heuristics. arXiv:2110.01518, 2021.
[36] Turc I, Chang MW, Lee K, Toutanova K. Well-read students learn better: On the importance of pre-training compact models. arXiv:1908.08962, 2019.