Information Processing, 2016, 30(3): 125–132 (in Chinese with English abstract).
[26] Wang Y. Research on commonsense knowledge acquisition [Ph.D. Thesis]. Beijing: University of Chinese Academy of Sciences, 2022
(in Chinese with English abstract).
[27] Deng P. A study of event graph completion and reasoning based on semantic enhancement [MS. Thesis]. Beijing: Institute of Computing
Technology, Chinese Academy of Sciences, 2023 (in Chinese with English abstract).
[28] Meilicke C, Fink M, Wang YJ, Ruffinelli D, Gemulla R, Stuckenschmidt H. Fine-grained evaluation of rule- and embedding-based
systems for knowledge graph completion. In: Proc. of the 17th Int’l Semantic Web Conf. Monterey: Springer, 2018. 3–20. [doi: 10.1007/
978-3-030-00671-6_1]
[29] Li X, Taheri A, Tu LF, Gimpel K. Commonsense knowledge base completion. In: Proc. of the 54th Annual Meeting of the Association
for Computational Linguistics. Berlin: ACL, 2016. 1445–1455. [doi: 10.18653/v1/P16-1137]
[30] Malaviya C, Bhagavatula C, Bosselut A, Choi Y. Commonsense knowledge base completion with structural and semantic context. In:
Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI, 2020. 2925–2933. [doi: 10.1609/aaai.v34i03.5684]
[31] Niu GL, Li B. Logic and commonsense-guided temporal knowledge graph completion. In: Proc. of the 37th AAAI Conf. on Artificial
Intelligence. Washington: AAAI, 2023. 4569–4577. [doi: 10.1609/aaai.v37i4.25579]
[32] Wang B, Wang GT, Huang J, You JX, Leskovec J, Kuo CCJ. Inductive learning on commonsense knowledge graph completion. In:
Proc. of the 2021 Int’l Joint Conf. on Neural Networks. Shenzhen: IEEE, 2021. 1–8. [doi: 10.1109/IJCNN52387.2021.9534355]
[33] Bosselut A, Le Bras R, Choi Y. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering.
In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. AAAI, 2021. 4923–4931. [doi: 10.1609/aaai.v35i6.16625]
[34] Zhang NY, Xie X, Chen X, Deng SM, Ye HB, Chen HJ. Knowledge collaborative fine-tuning for low-resource knowledge graph
completion. Ruan Jian Xue Bao/Journal of Software, 2022, 33(10): 3531–3545 (in Chinese with English abstract). http://www.jos.org.cn/
1000-9825/6628.htm [doi: 10.13328/j.cnki.jos.006628]
[35] Zhang YC, Chen Z, Guo LB, Xu YJ, Zhang W, Chen HJ. Making large language models perform better in knowledge graph completion.
arXiv:2310.06671, 2023.
[36] Wei YB, Huang QS, Zhang Y, Kwok JT. KICGPT: Large language model with knowledge in context for knowledge graph completion.
In: Proc. of the 2023 Findings of the Association for Computational Linguistics. Singapore: ACL, 2023. 8667–8683. [doi: 10.18653/v1/
2023.findings-emnlp.580]
[37] Luo RL, Gu TL, Li HL, Li JZ, Lin ZC, Li JY, Yang YJ. Chain of history: Learning and forecasting with LLMs for temporal knowledge
graph completion. arXiv:2401.06072, 2024.
[38] Yao L, Peng JZ, Mao CS, Luo Y. Exploring large language models for knowledge graph completion. arXiv:2308.13916, 2023.
[39] Pan SR, Luo LH, Wang YF, Chen C, Wang JP, Wu XD. Unifying large language models and knowledge graphs: A roadmap. IEEE Trans.
on Knowledge and Data Engineering, 2024, 36(7): 3580–3599. [doi: 10.1109/TKDE.2024.3352100]
[40] Li DW, Tan Z, Chen TL, Liu H. Contextualization distillation from large language model for knowledge graph completion. In: Proc. of
the 2024 Findings of the Association for Computational Linguistics. St. Julian’s: ACL, 2024. 458–477.
[41] OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023.
[42] Liu YH, Ott M, Goyal N, Du JF, Joshi M, Chen DQ, Levy O, Lewis M, Zettlemoyer L, Stoyanov V. RoBERTa: A robustly optimized
BERT pretraining approach. arXiv:1907.11692, 2019.
[43] Galárraga LA, Teflioudi C, Hose K, Suchanek F. AMIE: Association rule mining under incomplete evidence in ontological knowledge
bases. In: Proc. of the 22nd Int’l Conf. on World Wide Web. Rio de Janeiro: ACM, 2013. 413–422. [doi: 10.1145/2488388.2488425]
[44] Wang Y, Cao CG. Research on categorization of events based on event attributes. Journal of Chinese Information Processing, 2020,
34(10): 39–50 (in Chinese with English abstract). [doi: 10.3969/j.issn.1003-0077.2020.10.006]
[45] Song H, Cao CG, Wang Y, Wang S. A fine-grained annotated dataset for Chinese semantic-role labeling. Journal of Chinese Information
Processing, 2023, 37(1): 16–32 (in Chinese with English abstract). [doi: 10.3969/j.issn.1003-0077.2023.01.002]
[46] Cortes C, Vapnik V. Support-vector networks. Machine Learning, 1995, 20(3): 273–297. [doi: 10.1007/BF00994018]
[47] OpenAI. Introduction to GPT-4 and GPT-4-Turbo. 2024. https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
[48] Song H, Cao CG, Wang Y, Wang S. Construction of a finely-grained training dataset for Chinese semantic-role labeling. Journal of
Chinese Information Processing, 2022, 36(12): 52–66, 73 (in Chinese with English abstract). [doi: 10.3969/j.issn.1003-0077.2022.12.006]
[49] Lu C. Linguistics for Knowledge Engineering. Beijing: Tsinghua University Press, 2010 (in Chinese).
[50] Wang Y. Research on common sense knowledge acquisition methods based on semantic classification [MS. Thesis]. Guilin: Guangxi
Normal University, 2015 (in Chinese with English abstract).
[51] Zhu XY. Studies on Semantic Structure Patterns of Sentences in Modern Chinese. Beijing: Peking University Press, 2001 (in Chinese).

