                     Linguistics, 2022. 320–335. [doi: 10.18653/v1/2022.acl-long.26]
                  [6]   Shu WT, Li RX, Sun TX, Huang XJ, Qiu XP. Large language models: Principles, implementation, and progress. Journal of Computer
                     Research and Development, 2024, 61(2): 351–361 (in Chinese with English abstract). [doi: 10.7544/issn1000-1239.202330303]
                  [7]   Maynez J, Narayan S, Bohnet B, McDonald R. On faithfulness and factuality in abstractive summarization. In: Proc. of the 58th Annual
                      Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2020. 1906–1919.
                     [doi: 10.18653/v1/2020.acl-main.173]
                  [8]   Agrawal G, Kumarage T, Alghamdi Z, Liu H. Can knowledge graphs reduce hallucinations in LLMs?: A survey. arXiv:2311.07914,
                     2024.
                  [9]   Zhang TC, Tian X, Sun XH, Yu MH, Sun YH, Yu G. Overview on knowledge graph embedding technology research. Ruan Jian Xue
                      Bao/Journal of Software, 2023, 34(1): 277–311 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6429.htm [doi: 10.13328/j.cnki.jos.006429]
                 [10]   Baek J, Aji AF, Saffari A. Knowledge-augmented language model prompting for zero-shot knowledge graph question answering. In:
                     Proc. of the 1st Workshop on Matching from Unstructured and Structured Data. Toronto: Association for Computational Linguistics,
                     2023. 70–98. [doi: 10.18653/v1/2023.matching-1.7]
                 [11]   Kim J, Kwon Y, Jo Y, Choi E. KG-GPT: A general framework for reasoning on knowledge graphs using large language models. In:
                     Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics, 2023.
                     9410–9421. [doi: 10.18653/v1/2023.findings-emnlp.631]
                 [12]   Taffa TA, Usbeck R. Leveraging LLMs in scholarly knowledge graph question answering. In: Joint Proc. of Scholarly QALD 2023 and
                      SemREC 2023 co-located with the 22nd Int’l Semantic Web Conf. (ISWC 2023). Athens: CEUR-WS.org, 2023. 3592.
                  [13]   Qiao SJ, Yang GP, Yu Y, Han N, Qin X, Qu LL, Ran LQ, Li H. QA-KGNet: Language model-driven knowledge graph question-
                      answering model. Ruan Jian Xue Bao/Journal of Software, 2023, 34(10): 4584–4600 (in Chinese with English abstract).
                      http://www.jos.org.cn/1000-9825/6882.htm [doi: 10.13328/j.cnki.jos.006882]
                 [14]   Zhang HY, Wang X, Han LF, Li Z, Chen ZR, Chen Z. Research on question answering system on joint of knowledge graph and large
                     language models. Journal of Frontiers of Computer Science & Technology, 2023, 17(10): 2377–2388 (in Chinese with English abstract).
                     [doi: 10.3778/j.issn.1673-9418.2308070]
                 [15]   Bao ZJ, Chen W, Xiao SZ, Ren K, Wu JA, Zhong C, Peng JJ, Huang XJ, Wei ZY. DISC-MedLLM: Bridging general large language
                     models and real-world medical consultation. arXiv:2308.14346, 2023.
                 [16]   Cui JX, Ning MN, Li ZJ, Chen BH, Yan Y, Li H, Ling B, Tian YH, Yuan L. Chatlaw: A multi-agent collaborative legal assistant with
                     knowledge graph enhanced mixture-of-experts large language model. arXiv:2306.16092, 2024.
                  [17]   Wang N, Yang HY, Wang CD. FinGPT: Instruction tuning benchmark for open-source large language models in financial datasets.
                      arXiv:2310.04793, 2023.
                 [18]   Wang WS, Xing M. Mechanical Manufacturing Handbook. Shenyang: Liaoning Science and Technology Publishing House, 2002 (in
                     Chinese).
                 [19]   Agarwal A, Gawade S, Azad AP, Bhattacharyya P. KITLM: Domain-specific knowledge integration into language models for question
                     answering. arXiv:2308.03638, 2023.
                  [20]   Bulian J, Buck C, Gajewski W, Börschinger B, Schuster T. Tomayto, tomahto. Beyond token-level answer equivalence for question
                      answering evaluation. In: Proc. of the 2022 Conf. on Empirical Methods in Natural Language Processing. Abu Dhabi: Association for
                      Computational Linguistics, 2022. 291–305. [doi: 10.18653/v1/2022.emnlp-main.20]
                 [21]   Touvron H, Lavril T, Izacard G, Martinet X, Lachaux MA, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F, Rodriguez A, Joulin A,
                     Grave E, Lample G. LLaMA: Open and efficient foundation language models. arXiv:2302.13971, 2023.
                 [22]   Gao YF, Xiong Y, Gao XY, Jia KX, Pan JL, Bi YX, Dai Y, Sun JW, Wang M, Wang HF. Retrieval-augmented generation for large
                     language models: A survey. arXiv:2312.10997, 2024.
                 [23]   Che WX, Dou ZC, Feng YS, Gui T, Han XP, Hu BT, Huang ML, Huang XJ, Liu K, Liu T, Liu ZY, Qin B, Qiu XP, Wan XJ, Wang YX,
                     Wen JR, Yan R, Zhang JJ, Zhang M, Zhang Q, Zhao J, Zhao X, Zhao YY. Towards a comprehensive understanding of the impact of large
                      language models on natural language processing: Challenges, opportunities and future directions. SCIENTIA SINICA Informationis,
                     2023, 53(9): 1645–1687 (in Chinese with English abstract). [doi: 10.1360/SSI-2023-0113]
                 [24]   Hu EJ, Shen YL, Wallis P, Allen-Zhu Z, Li YZ, Wang SA, Wang L, Chen WZ. LoRA: Low-rank adaptation of large language models. In:
                     Proc. of the 10th Int’l Conf. on Learning Representations. 2022. [doi: 10.48550/arXiv.2106.09685]
                 [25]   Liu X, Ji KX, Fu YC, Tam W, Du ZX, Yang ZL, Tang J. P-Tuning: Prompt tuning can be comparable to fine-tuning across scales and
                     tasks. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin: Association for Computational
                     Linguistics, 2022. 61–68. [doi: 10.18653/v1/2022.acl-short.8]
                 [26]   Liu X, Ji KX, Fu YC, Tam WL, Du ZX, Yang ZL, Tang J. P-Tuning v2: Prompt tuning can be comparable to fine-tuning universally