Journal of Software (软件学报), 2025, No. 4
孙泽辰 et al.: A hybrid data augmentation framework based on controllable explanations. 1619
Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J,
Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. In: Proc. of the 34th Int’l Conf. on
Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 1877–1901.
[44] Lester B, Al-Rfou R, Constant N. The power of scale for parameter-efficient prompt tuning. In: Proc. of the 2021 Conf. on Empirical
Methods in Natural Language Processing. Punta Cana: Association for Computational Linguistics, 2021. 3045–3059. [doi: 10.18653/
v1/2021.emnlp-main.243]
[45] Dou SH, Zheng R, Wu T, Gao SY, Shan JJ, Zhang Q, Wu YM, Huang XJ. Decorrelate irrelevant, purify relevant: Overcome textual
spurious correlations from a feature perspective. In: Proc. of the 29th Int’l Conf. on Computational Linguistics. Gyeongju: Int’l
Committee on Computational Linguistics, 2022. 2278–2287. https://aclanthology.org/2022.coling-1.199
[46] Thorne J, Vlachos A, Cocarascu O, Christodoulopoulos C, Mittal A. The fact extraction and verification (FEVER) shared task. In: Proc.
of the 1st Workshop on Fact Extraction and Verification (FEVER). Brussels: Association for Computational Linguistics, 2018. 1–9. [doi:
10.18653/v1/W18-5501]
[47] Schuster T, Shah D, Yeo YJS, Ortiz DRF, Santus E, Barzilay R. Towards debiasing fact verification models. In: Proc. of the 2019 Conf.
on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP).
Hong Kong: Association for Computational Linguistics, 2019. 3419–3425. [doi: 10.18653/v1/D19-1341]
[48] Zhang Y, Baldridge J, He LH. PAWS: Paraphrase adversaries from word scrambling. In: Proc. of the 2019 Conf. of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies (Vol. 1: Long and Short Papers). Minneapolis:
Association for Computational Linguistics, 2019. 1298–1308. [doi: 10.18653/v1/N19-1131]
[49] Ott M, Edunov S, Baevski A, Fan A, Gross S, Ng N, Grangier D, Auli M. Fairseq: A fast, extensible toolkit for sequence modeling. In:
Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics (Demonstrations). Minneapolis:
Association for Computational Linguistics, 2019. 48–53. [doi: 10.18653/v1/N19-4009]
[50] Hu EJ, Shen YL, Wallis P, Allen-Zhu Z, Li YZ, Wang SA, Wang L, Chen WZ. LoRA: Low-rank adaptation of large language models.
arXiv:2106.09685, 2021.

Chinese references:
[23] Zhang DC, Zhang K, Wu L, Wang M. Causal debiased reasoning method for context-aware natural language. Journal of Computer
Research and Development, 2023, 60(8): 1768–1779 (in Chinese). [doi: 10.7544/issn1000-1239.202330248]

孙泽辰 (2000-), female, master's student. Her research interests include natural language processing.

肖义胜 (1999-), male, PhD candidate. His research interests include natural language processing.

李俊涛 (1993-), male, PhD, associate professor, CCF professional member. His research interests include natural language processing.

张民 (1970-), male, PhD, professor, doctoral supervisor, CCF senior member. His research interests include natural language processing, machine translation, and artificial intelligence.

周国栋 (1967-), male, PhD, professor, doctoral supervisor, CCF distinguished member. His research interests include natural language processing.