                     across scales and tasks. arXiv:2110.07602, 2022.
                 [27]   Wang HC, Liu C, Xi NW, Qiang ZW, Zhao SD, Qin B, Liu T. HuaTuo: Tuning LLaMA model with Chinese medical knowledge. arXiv:2304.06975, 2023.
                 [28]   Griffith S, Subramanian K, Scholz J, Isbell CL, Thomaz A. Policy shaping: Integrating human feedback with reinforcement learning. In:
                     Proc. of the 26th Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: Curran Associates Inc., 2013. 2625–2633.
                 [29]   Lewis P, Perez E, Piktus A, Petroni F, Karpukhin V, Goyal N, Küttler H, Lewis M, Yih WT, Rocktäschel T, Riedel S, Kiela D. Retrieval-
                     augmented generation for knowledge-intensive NLP tasks. In: Proc. of the 34th Conf. on Neural Information Processing Systems. 2020.
                     9459–9474.
                 [30]   Guu K, Lee K, Tung Z, Pasupat P, Chang MW. Retrieval augmented language model pre-training. In: Proc. of the 37th Int’l Conf. on
                     Machine Learning. 2020. 3929–3938.
                 [31]   Robertson S, Zaragoza H. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 2009, 3(4): 333–389. [doi: 10.1561/1500000019]
                 [32]   Levonian Z, Li CL, Zhu WD, Gade A, Henkel O, Postle ME, Xing WL. Retrieval-augmented generation to improve math question-
                     answering: Trade-offs between groundedness and human preference. arXiv:2310.03184, 2023.
                 [33]   Yu WH, Iter D, Wang SH, Xu YC, Ju MX, Sanyal S, Zhu CG, Zeng M, Jiang M. Generate rather than retrieve: Large language models
                     are strong context generators. In: Proc. of the 11th Int’l Conf. on Learning Representations. 2023. https://iclr.cc/virtual/2023/poster/12027
                 [34]   Ma XB, Gong YY, He PC, Zhao H, Duan N. Query rewriting in retrieval-augmented large language models. In: Proc. of the 2023 Conf. on Empirical Methods in Natural Language Processing. Singapore: Association for Computational Linguistics, 2023. 5303–5315. [doi: 10.18653/v1/2023.emnlp-main.322]
                 [35]   Gao LY, Ma XG, Lin J, Callan J. Precise zero-shot dense retrieval without relevance labels. In: Proc. of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto: Association for Computational Linguistics, 2023. 1762–1777. [doi: 10.18653/v1/2023.acl-long.99]
                 [36]   Huang DR, Wei ZZ, Yue AZ, Zhao X, Chen ZL, Li R, Jiang K, Chang BX, Zhang QL, Zhang SJ, Zhang Z. DSQA-LLM: Domain-
                     specific intelligent question answering based on large language model. In: Proc. of the 1st Int’l Conf. on AI-generated Content. Shanghai:
                     Springer, 2023. 170–180. [doi: 10.1007/978-981-99-7587-7_14]
                 [37]   Yu DH, Zhu CG, Fang YW, Yu WH, Wang SH, Xu YC, Ren X, Yang YM, Zeng M. KG-FiD: Infusing knowledge graph in fusion-in-
                     decoder for open-domain question answering. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics.
                     Dublin: Association for Computational Linguistics, 2022. 4961–4974. [doi: 10.18653/v1/2022.acl-long.340]
                 [38]   Tan YM, Min DH, Li Y, Li WB, Hu N, Chen YR, Qi GL. Can ChatGPT replace traditional KBQA models? An in-depth analysis of the
                     question answering performance of the GPT LLM family. arXiv:2303.07992, 2023.
                 [39]   Omar R, Mangukiya O, Kalnis P, Mansour E. ChatGPT versus traditional question answering for knowledge graphs: Current status and
                     future directions towards knowledge graph Chatbots. arXiv:2302.06466, 2023.
                 [40]   Wu YK, Hu N, Bi S, Qi GL, Ren J, Xie AH, Song W. Retrieve-rewrite-answer: A KG-to-text enhanced LLMs framework for knowledge
                     graph question answering. arXiv:2309.11206, 2023.
                 [41]   Soman K, Rose PW, Morris JH, Akbas RE, Smith B, Peetoom B, Villouta-Reyes C, Cerono G, Shi YM, Rizk-Jackson A, Israni S, Nelson
                     CA, Huang S, Baranzini SE. Biomedical knowledge graph-optimized prompt generation for large language models. arXiv:2311.17330,
                     2024.
                 [42]   Wang XT, Yang QW, Qiu YT, Liang JQ, He QY, Gu ZH, Xiao YH, Wang W. KnowledGPT: Enhancing large language models with
                     retrieval and storage access on knowledge bases. arXiv:2308.11761, 2023.
                 [43]   Hu CX, Fu J, Du CZ, Luo SM, Zhao JB, Zhao H. ChatDB: Augmenting LLMs with databases as their symbolic memory. arXiv:2306.03901, 2023.
                 [44]   Li ZY, Fan SQ, Gu Y, Li XX, Duan ZC, Dong BW, Liu N, Wang JY. FlexKBQA: A flexible LLM-powered framework for few-shot
                     knowledge base question answering. In: Proc. of the 38th AAAI Conf. on Artificial Intelligence. Vancouver: AAAI, 2024. 18608–18616.
                     [doi: 10.1609/aaai.v38i17.29823]
                 [45]   Agarwal D, Das R, Khosla S, Gangadharaiah R. Bring your own KG: Self-supervised program synthesis for zero-shot KGQA. arXiv:2311.07850, 2024.
                 [46]   Chen YL, Zhang YM, Yu JF, Yang L, Xia R. In-context learning for knowledge base question answering for unmanned systems based on large language models. In: Proc. of the 8th China Conf. on Knowledge Graph and Semantic Computing. Shenyang: Springer, 2023. 327–339. [doi: 10.1007/978-981-99-7224-1_26]
                 [47]   Chu XT, Liu JP, Wang J, Wang XF, Wang YF, Wang M, Gu XX. CSDR-BERT: A pre-trained scientific dataset match model for Chinese
                     scientific dataset retrieval. arXiv:2301.12700, 2023.
                 [48]   Liu NF, Lin K, Hewitt J, Paranjape A, Bevilacqua M, Petroni F, Liang P. Lost in the middle: How language models use long contexts.