Page 211 - Journal of Software (《软件学报》), 2025, No. 4
Sun ZC, et al.: A hybrid data augmentation framework based on controllable explanations 1617
Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. 7597–7610. [doi: 10.18653/v1/
2020.emnlp-main.613]
[6] Tu LF, Lalwani G, Gella S, He H. An empirical study on robustness to spurious correlations using pre-trained language models. Trans. of
the Association for Computational Linguistics, 2020, 8: 621–633. [doi: 10.1162/tacl_a_00335]
[7] McCoy RT, Pavlick E, Linzen T. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In: Proc. of
the 57th Annual Meeting of the Association for Computational Linguistics. Florence: Association for Computational Linguistics, 2019.
3428–3448. [doi: 10.18653/v1/P19-1334]
[8] Gururangan S, Swayamdipta S, Levy O, Schwartz R, Bowman S, Smith NA. Annotation artifacts in natural language inference data. In:
Proc. of the 2018 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
(Vol. 2: Short Papers). New Orleans: Association for Computational Linguistics, 2018. 107–112. [doi: 10.18653/v1/N18-2017]
[9] Geirhos R, Jacobsen JH, Michaelis C, Zemel R, Brendel W, Bethge M, Wichmann FA. Shortcut learning in deep neural networks. Nature
Machine Intelligence, 2020, 2(11): 665–673. [doi: 10.1038/s42256-020-00257-z]
[10] Schwartz R, Stanovsky G. On the limitations of dataset balancing: The lost battle against spurious correlations. In: Proc. of the 2022
Findings of the Association for Computational Linguistics: NAACL 2022. Seattle: Association for Computational Linguistics, 2022.
2182–2194. [doi: 10.18653/v1/2022.findings-naacl.168]
[11] Du MN, He FX, Zou N, Tao DC, Hu X. Shortcut learning of large language models in natural language understanding. Communications
of the ACM, 2023, 67(1): 110–120. [doi: 10.1145/3596490]
[12] Touvron H, Lavril T, Izacard G, Martinet X, Lachaux MA, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F, Rodriguez A, Joulin A,
Grave E, Lample G. LLaMA: Open and efficient foundation language models. arXiv:2302.13971, 2023.
[13] Touvron H, Martin L, Stone K, et al. LLaMA 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023.
[14] Chung HW, Hou L, Longpre S, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 2024,
25(70): 1–53.
[15] Ouyang L, Wu J, Jiang X, Almeida D, Wainwright CL, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A, Schulman J, Hilton J, Kelton F,
Miller L, Simens M, Askell A, Welinder P, Christiano P, Leike J, Lowe R. Training language models to follow instructions with human
feedback. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2022.
27730–27744.
[16] Niven T, Kao HY. Probing neural network comprehension of natural language arguments. In: Proc. of the 57th Annual Meeting of the
Association for Computational Linguistics. Florence: Association for Computational Linguistics, 2019. 4658–4664. [doi: 10.18653/v1/
P19-1459]
[17] Lai YX, Zhang C, Feng YS, Huang QZ, Zhao DY. Why machine reading comprehension models learn shortcuts? In: Proc. of the 2021
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics, 2021.
989–1002. [doi: 10.18653/v1/2021.findings-acl.85]
[18] Liu F, Avci B. Incorporating priors with feature attribution on text classification. In: Proc. of the 57th Annual Meeting of the Association
for Computational Linguistics. Florence: Association for Computational Linguistics, 2019. 6274–6283. [doi: 10.18653/v1/P19-1631]
[19] Han XC, Tsvetkov Y. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates. In: Proc. of
the 2021 Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana: Association for Computational
Linguistics, 2021. 4398–4409. [doi: 10.18653/v1/2021.findings-emnlp.374]
[20] Clark C, Yatskar M, Zettlemoyer L. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In: Proc. of
the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing
(EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 4069–4082. [doi: 10.18653/v1/D19-1418]
[21] He H, Zha S, Wang HH. Unlearn dataset bias in natural language inference by fitting the residual. In: Proc. of the 2nd Workshop on Deep
Learning Approaches for Low-resource Natural Language Processing. Hong Kong: Association for Computational Linguistics, 2019.
132–142. [doi: 10.18653/v1/D19-6115]
[22] Sanh V, Wolf T, Belinkov Y, Rush AM. Learning from others' mistakes: Avoiding dataset biases without modeling them.
arXiv:2012.01300, 2020.
[23] Zhang DC, Zhang K, Wu L, Wang M. Causal-based debiased reasoning method for grounded textual entailment. Journal of Computer
Research and Development, 2023, 60(8): 1768–1779 (in Chinese with English abstract). [doi: 10.7544/issn1000-1239.202330248]
[24] Nam J, Cha H, Ahn S, Lee J, Shin J. Learning from failure: Training debiased classifier from biased classifier. In: Proc. of the 34th Int’l
Conf. on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 20673–20684.
[25] Liu EZ, Haghgoo B, Chen AS, Raghunathan A, Koh PW, Sagawa S, Liang P, Finn C. Just train twice: Improving group robustness