1636 Journal of Software (软件学报), 2025, Vol. 36, No. 4
[42] Zeng AH, Liu X, Du ZX, Wang ZH, Lai HY, Ding M, Yang ZY, Xu YF, Zheng WD, Xia X, Tam WL, Ma ZX, Xue YF, Zhai JD, Chen
WG, Zhang P, Dong YX, Tang J. GLM-130B: An open bilingual pre-trained model. arXiv:2210.02414, 2023.
[43] Du ZX, Qian YJ, Liu X, Ding M, Qiu JZ, Yang ZL, Tang J. GLM: General language model pretraining with autoregressive blank infilling. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin: ACL, 2022. 320–335. [doi: 10.18653/v1/2022.acl-long.26]
[44] Kim Y, Yim J, Yun J, Kim J. NLNL: Negative learning for noisy labels. In: Proc. of the 2019 IEEE/CVF Int’l Conf. on Computer Vision.
Seoul: IEEE, 2019. 101–110. [doi: 10.1109/ICCV.2019.00019]
[45] Ma RT, Gui T, Li LY, Zhang Q, Huang XJ, Zhou YQ. SENT: Sentence-level distant relation extraction via negative training. In: Proc. of
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing.
ACL, 2021. 6201–6213. [doi: 10.18653/v1/2021.acl-long.484]
[46] Stoica G, Platanios EA, Póczos B. Re-TACRED: Addressing shortcomings of the TACRED dataset. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. AAAI, 2021. 13843–13850. [doi: 10.1609/aaai.v35i15.17631]
[47] Yu JJ, Wang X, Zhao JJ, Yang CJ, Chen WL. STAD: Self-training with ambiguous data for low-resource relation extraction. In: Proc. of
the 29th Int’l Conf. on Computational Linguistics. Gyeongju: Int’l Committee on Computational Linguistics, 2022. 2044–2054.
[48] Wan Z, Cheng F, Mao ZY, Liu QY, Song HY, Li JW, Kurohashi S. GPT-RE: In-context learning for relation extraction using large language models. In: Proc. of the 2023 Conf. on Empirical Methods in Natural Language Processing. Singapore: ACL, 2023. 3534–3547. [doi: 10.18653/v1/2023.emnlp-main.214]
[49] Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023.
Chinese references:
[14] Ouyang DT, Qu JF, Ye YX. Ontology-based expansion of distantly supervised samples for relation extraction. Journal of Software, 2014, 25(9): 2088–2101 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/4638.htm [doi: 10.13328/j.cnki.jos.004638]
[18] Zhu SY, Hui HT, Qian LH, Zhang M. Self-supervised learning based extraction of family relations from Wikipedia. Journal of Computer Applications, 2015, 35(4): 1013–1016, 1020 (in Chinese). [doi: 10.11772/j.issn.1001-9081.2015.04.1013]
[19] Hu YN, Shu JG, Qian LH, Zhu QM. Cross-lingual relation extraction based on machine translation. Journal of Chinese Information Processing, 2013, 27(5): 191–198 (in Chinese). [doi: 10.3969/j.issn.1003-0077.2013.05.028]
[27] Zhao SQ, Liu T, Li S. Research on paraphrasing technology. Journal of Software, 2009, 20(8): 2124–2137 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/3587.htm [doi: 10.3724/SP.J.1001.2009.03587]
[28] Zhu HY, Jin ZL, Hong Y, Su YL, Zhang M. Directional data augmentation for question paraphrase identification. Journal of Chinese Information Processing, 2022, 36(9): 38–45 (in Chinese). [doi: 10.3969/j.issn.1003-0077.2022.09.004]
[41] E HH, Zhang WJ, Xiao SQ, Cheng R, Hu YX, Zhou XS, Niu PQ. Survey of entity relation extraction based on deep learning. Journal of Software, 2019, 30(6): 1793–1818 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5817.htm [doi: 10.13328/j.cnki.jos.005817]
YU Junjie (1992-), male, PhD candidate. His research interests include natural language processing and information extraction.

CHEN Wenliang (1977-), male, PhD, professor, doctoral supervisor, CCF senior member. His research interests include natural language processing, information extraction, and knowledge graphs.

WANG Xing (1988-), male, PhD, senior researcher. His research interests include natural language processing, machine translation, and large language models.

ZHANG Min (1970-), male, PhD, professor, doctoral supervisor, CCF senior member. His research interests include natural language processing, machine translation, and artificial intelligence.