
5626                                                      软件学报  2025  年第  36  卷第  12  期


                      Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 2319–2328.
                 [103]   Cohen WW. TensorLog: A differentiable deductive database. arXiv:1605.06523, 2016.
                 [104]   Ghiasnezhad Omran P, Wang KW, Wang Z. Scalable rule learning via learning representation. In: Proc. of the 27th Int’l Joint Conf. on
                      Artificial Intelligence. Stockholm: AAAI, 2018. 2149–2155.
                 [105]   Rocktäschel T. Deep Prolog: End-to-end differentiable proving in knowledge bases. In: Proc. of the 2nd Conf. on Artificial Intelligence
                      and Theorem Proving. Obergurgl: AITP, 2017. 37.
                 [106]   Meilicke C, Wudage Chekol M, Ruffinelli D, Stuckenschmidt H. Anytime bottom-up rule learning for knowledge graph completion. In:
                      Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence. Macao: AAAI, 2019. 3137–3143.
                 [107]   Ma JT, Qiao YQ, Hu GW, Wang YJ, Zhang CQ, Huang YZ, Kumar Sangaiah A, Wu HG, Zhang HP, Ren K. ELPKG: A high-accuracy
                      link prediction approach for knowledge graph completion. Symmetry, 2019, 11(9): 1096. [doi: 10.3390/sym11091096]
                 [108]   Niu GL, Zhang YF, Li B, Cui P, Liu S, Li JY, Zhang XW. Rule-guided compositional representation learning on knowledge graphs. In:
                      Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI, 2020. 2950–2958. [doi: 10.1609/aaai.v34i03.5687]
                 [109]   Jiang TS, Liu TY, Ge T, Sha L, Li SJ, Chang BB, Sui ZF. Encoding temporal information for time-aware link prediction. In: Proc. of the
                      2016 Conf. on Empirical Methods in Natural Language Processing. Austin: Association for Computational Linguistics, 2016.
                      2350–2354. [doi: 10.18653/v1/D16-1260]
                 [110]   Ni RY, Ma ZG, Yu KH, Xu XH. Specific time embedding for temporal knowledge graph completion. In: Proc. of the 19th IEEE Int’l
                      Conf. on Cognitive Informatics & Cognitive Computing. Beijing: IEEE, 2020. 105–110. [doi: 10.1109/ICCICC50026.2020.9450214]
                 [111]   Dasgupta SS, Ray SN, Talukdar P. HyTE: Hyperplane-based temporally aware knowledge graph embedding. In: Proc. of the 2018 Conf.
                      on Empirical Methods in Natural Language Processing. Brussels: Association for Computational Linguistics, 2018. 2001–2011. [doi: 10.
                      18653/v1/D18-1225]
                 [112]   Tang XL, Yuan R, Li QY, Wang TY, Yang HZ, Cai YD, Song HJ. Timespan-aware dynamic knowledge graph embedding by
                      incorporating temporal evolution. IEEE Access, 2020, 8: 6849–6860. [doi: 10.1109/ACCESS.2020.2964028]
                 [113]   Wang XZ, Gao TY, Zhu ZC, Zhang ZY, Liu ZY, Li JZ, Tang J. KEPLER: A unified model for knowledge embedding and pre-trained
                      language representation. Trans. of the Association for Computational Linguistics, 2021, 9: 176–194. [doi: 10.1162/tacl_a_00360]
                 [114]   Liu S, Qin YF, Xu M, Kolmanič S. Knowledge graph completion with triple structure and text representation. Int’l Journal of
                      Computational Intelligence Systems, 2023, 16(1): 95. [doi: 10.1007/s44196-023-00271-0]
                 [115]   Liu WJ, Zhou P, Zhao Z, Wang ZR, Ju Q, Deng HT, Wang P. K-BERT: Enabling language representation with knowledge graph. In:
                      Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI, 2020. 2901–2908. [doi: 10.1609/aaai.v34i03.5681]
                 [116]   Wang M, Wang S, Yang H, Zhang Z, Chen X, Qi GL. Is visual context really helpful for knowledge graph? A representation learning
                      perspective. In: Proc. of the 29th ACM Int’l Conf. on Multimedia. ACM, 2021. 2735–2743. [doi: 10.1145/3474085.3475470]
                 [117]   Zhang X, Liang X, Zheng XP, Wu B, Guo YH. MULTIFORM: Few-shot knowledge graph completion via multi-modal contexts. In:
                      Proc. of the 2022 European Conf. on Machine Learning and Knowledge Discovery in Databases. Grenoble: Springer, 2022. 172–187.
                      [doi: 10.1007/978-3-031-26390-3_11]
                 [118]   Wei YY, Chen W, Zhang XF, Zhao PP, Qu JF, Zhao L. Multi-modal siamese network for few-shot knowledge graph completion. In:
                      Proc. of the 40th Int’l Conf. on Data Engineering. Utrecht: IEEE, 2024. 719–732. [doi: 10.1109/ICDE60146.2024.00061]
                 [119]   Liang S, Zhu AJ, Zhang JS, Shao J. Hyper-node relational graph attention network for multi-modal knowledge graph completion. ACM
                      Trans. on Multimedia Computing, Communications and Applications, 2023, 19(2): 62. [doi: 10.1145/3545573]
                 [120]   Zhang YC, Chen Z, Zhang W. MACO: A modality adversarial and contrastive framework for modality-missing multi-modal knowledge
                      graph completion. In: Proc. of the 12th National CCF Conf. on Natural Language Processing and Chinese Computing. Foshan: Springer,
                      2023. 123–134. [doi: 10.1007/978-3-031-44693-1_10]
                 [121]   Zhang YC, Chen Z, Liang L, Chen HJ, Zhang W. Unleashing the power of imbalanced modality information for multi-modal knowledge
                      graph completion. In: Proc. of the 2024 Joint Int’l Conf. on Computational Linguistics, Language Resources and Evaluation (LREC-
                      COLING 2024). Torino: ELRA, ICCL, 2024. 17120–17130.
                 [122]   Zhang YC, Chen Z, Guo LB, Xu YJ, Hu BB, Liu ZQ, Zhang W, Chen HJ. NativE: Multi-modal knowledge graph completion in the
                      wild. In: Proc. of the 47th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Washington: ACM, 2024.
                      91–101. [doi: 10.1145/3626772.3657800]
                 [123]   Zhao Y, Cai XR, Wu YK, Zhang HW, Zhang Y, Zhao GQ, Jiang N. MoSE: Modality split and ensemble for multimodal knowledge
                      graph completion. In: Proc. of the 2022 Conf. on Empirical Methods in Natural Language Processing. Abu Dhabi: Association for
                      Computational Linguistics, 2022. 10527–10536. [doi: 10.18653/v1/2022.emnlp-main.719]
                 [124]   Wang YP, Ning B, Wang X, Li GY. Multi-hop neighbor fusion enhanced hierarchical Transformer for multi-modal knowledge graph
                      completion. World Wide Web (WWW), 2024, 27(1). [doi: 10.1007/s11280-024-01289-w]