《软件学报》 (Journal of Software), 2025, No. 10, p. 370

Liu YD, et al.: Multi-agent consistency reflection for LLM-based conversational aspect-level sentiment understanding                                              4767


                 [24]   Zhang JJ, Hou YP, Xie RB, Sun WQ, McAuley J, Zhao WX, Lin LY, Wen JR. AgentCF: Collaborative learning with autonomous
                     language agents for recommender systems. arXiv:2310.09233, 2023.
                 [25]   Xu ZR, Shi SB, Hu BT, Yu JD, Li DF, Zhang M, Wu YX. Towards reasoning in large language models via multi-agent peer review
                     collaboration. arXiv:2311.08152, 2023.
                 [26]   Liu ZY, Lai ZQ, Gao ZW, Cui EF, Li ZH, Zhu XZ, Lu LW, Chen QF, Qiao Y, Dai JF, Wang WH. ControlLLM: Augment language
                     models with tools by searching on graphs. arXiv:2310.17796, 2023.
                 [27]   Liu JC, Shen DH, Zhang YZ, Dolan B, Carin L, Chen WZ. What makes good in-context examples for GPT-3? In: Proc. of the Deep
                     Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Dublin: ACL, 2022.
                     100–114. [doi: 10.18653/v1/2022.deelio-1.10]
                 [28]   Yao SY, Zhao J, Yu D, Du N, Shafran I, Narasimhan K, Cao Y. ReAct: Synergizing reasoning and acting in language models.
                     arXiv:2210.03629, 2023.
                 [29]   Wei J, Wang XZ, Schuurmans D, Bosma M, Ichter B, Xia F, Chi EH, Le QV, Zhou D. Chain-of-thought prompting elicits reasoning in
                     large language models. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc.,
                     2022. 24824–24837.
                 [30]   Yao SY, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, Narasimhan K. Tree of thoughts: Deliberate problem solving with large language
                     models. In: Proc. of the 37th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023.
                     11809–11822.
                 [31]   Shinn N, Labash B, Gopinath A. Reflexion: An autonomous agent with dynamic memory and self-reflection. arXiv:2303.11366, 2023.
                 [32]   Shinn N, Cassano F, Gopinath A, Narasimhan K, Yao SY. Reflexion: Language agents with verbal reinforcement learning. In: Proc. of
                     the 37th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 8634–8652.
                 [33]   Huang X, Lian JX, Lei YX, Yao J, Lian DF, Xie X. Recommender AI agent: Integrating large language models for interactive
                     recommendations. arXiv:2308.16505, 2024.
                 [34]   Zhang WX, Deng Y, Liu B, Pan SJ, Bing LD. Sentiment analysis in the era of large language models: A reality check. arXiv:2305.15005,
                     2023.
                 [35]   Wei J, Bosma M, Zhao VY, Guu K, Yu AW, Lester B, Du N, Dai AM, Le QV. Finetuned language models are zero-shot learners.
                     arXiv:2109.01652, 2022.
                 [36]   Cai HJ, Xia R, Yu JF. Aspect-category-opinion-sentiment quadruple extraction with implicit aspects and opinions. In: Proc. of the 59th
                     Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing, Vol. 1
                     (Long Papers). ACL, 2021. 340–350. [doi: 10.18653/v1/2021.acl-long.29]
                 [37]   Hu EJ, Shen YL, Wallis P, Allen-Zhu Z, Li YZ, Wang SA, Wang L, Chen WZ. LoRA: Low-rank adaptation of large language models.
                     arXiv:2106.09685, 2021.
                 [38]   Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2017.
                 [39]   Yang YM, Liu X. A re-examination of text categorization methods. In: Proc. of the 22nd Annual Int’l ACM SIGIR Conf. on Research
                     and Development in Information Retrieval. Berkeley: ACM, 1999. 42–49.
                 [40]   Zhang WX, Li X, Deng Y, Bing LD, Lam W. Towards generative aspect-based sentiment analysis. In: Proc. of the 59th Annual Meeting
                     of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing, Vol. 2 (Short Papers).
                     ACL, 2021. 504–510. [doi: 10.18653/v1/2021.acl-short.64]
                 [41]   Wei X, Cui XY, Cheng N, Wang XB, Zhang X, Huang S, Xie PJ, Xu J, Chen YF, Zhang MS, Jiang Y, Han WJ. ChatIE: Zero-shot
                     information extraction via chatting with ChatGPT. arXiv:2302.10205, 2024.
                 [42]   Yang AY, Xiao B, Wang BN, et al. Baichuan 2: Open large-scale language models. arXiv:2309.10305, 2023.

                             LIU Yiding (1999-), male, master’s student. His main research interest is natural language processing.

                             LUO Jiamin (1997-), female, Ph.D. candidate, CCF student member. Her main research interest is natural language processing.

                             WANG Jingjing (1990-), male, Ph.D., associate professor, CCF professional member. His main research interest is natural language processing.

                             ZHOU Guodong (1965-), male, Ph.D., professor, Ph.D. supervisor, CCF distinguished member. His main research interest is natural language processing.