Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 323. [doi: 10.5555/3495724.3496047]
[21] Lee DH. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In: Proc. of the 2013
Workshop on Challenges in Representation Learning. 2013. 896.
[22] Zhang JJ, Zong CQ. Exploiting source-side monolingual data in neural machine translation. In: Proc. of the 2016 Conf. on Empirical
Methods in Natural Language Processing. Austin: ACL, 2016. 1535–1545. [doi: 10.18653/v1/D16-1160]
[23] Sachan M, Xing E. Self-training for jointly learning to ask and answer questions. In: Proc. of the 2018 Conf. of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans: ACL, 2018. 629–640. [doi: 10.
18653/v1/N18-1058]
[24] Rotman G, Reichart R. Deep contextualized self-training for low resource dependency parsing. Trans. of the Association for
Computational Linguistics, 2019, 7: 695–713. [doi: 10.1162/tacl_a_00294]
[25] Hu XM, Zhang CW, Ma FK, Liu CY, Wen LJ, Yu PS. Semi-supervised relation extraction via incremental meta self-training. In: Proc.
of the 2021 Findings of the Association for Computational Linguistics. Punta Cana: ACL, 2021. 487–496. [doi: 10.18653/v1/2021.
findings-emnlp.44]
[26] Xu BF, Wang Q, Lyu YJ, Dai D, Zhang YD, Mao ZD. S2ynRE: Two-stage self-training with synthetic data for low-resource relation
extraction. In: Proc. of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto: ACL, 2023. 8186–8207.
[doi: 10.18653/v1/2023.acl-long.455]
[27] Zhao SQ, Liu T, Li S. Research on paraphrasing technology. Ruan Jian Xue Bao/Journal of Software, 2009, 20(8): 2124–2137 (in
Chinese with English abstract). http://www.jos.org.cn/1000-9825/3587.htm [doi: 10.3724/SP.J.1001.2009.03587]
[28] Zhu HY, Jin ZL, Hong Y, Su YL, Zhang M. Directional data augmentation for question paraphrase identification. Journal of Chinese
Information Processing, 2022, 36(9): 38–45 (in Chinese with English abstract). [doi: 10.3969/j.issn.1003-0077.2022.09.004]
[29] Wei J, Bosma M, Zhao VY, Guu K, Yu AW, Lester B, Du N, Dai AM, Le QV. Finetuned language models are zero-shot learners. In:
Proc. of the 10th Int’l Conf. on Learning Representations. 2022.
[30] Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss
A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B,
Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. In: Proc. of the 34th Conf.
on Neural Information Processing Systems. 2020. 1877–1901.
[31] Kojima T, Gu SS, Reid M, Matsuo Y, Iwasawa Y. Large language models are zero-shot reasoners. In: Proc. of the 36th Conf. on Neural
Information Processing Systems. 2022. 22199–22213.
[32] Liu PF, Yuan WZ, Fu JL, Jiang ZB, Hayashi H, Neubig G. Pre-train, prompt, and predict: A systematic survey of prompting methods in
natural language processing. ACM Computing Surveys, 2023, 55(9): 195. [doi: 10.1145/3560815]
[33] Tang TY, Lu HY, Jiang YE, Huang HY, Zhang DD, Zhao WX, Kocmi T, Wei FR. Not all metrics are guilty: Improving NLG evaluation
by diversifying references. arXiv:2305.15067, 2024.
[34] Cour T, Sapp B, Taskar B. Learning from partial labels. The Journal of Machine Learning Research, 2011, 12: 1501–1536. [doi: 10.5555/
1953048.2021049]
[35] Li ZH, Zhang M, Chen WL. Ambiguity-aware ensemble training for semi-supervised dependency parsing. In: Proc. of the 52nd Annual
Meeting of the Association for Computational Linguistics. Baltimore: ACL, 2014. 457–467. [doi: 10.3115/v1/P14-1043]
[36] Xie MK, Huang SJ. Partial multi-label learning with noisy label identification. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence.
New York: AAAI, 2020. 6454–6461. [doi: 10.1609/aaai.v34i04.6117]
[37] Nguyen N, Caruana R. Classification with partial labels. In: Proc. of the 14th ACM SIGKDD Int’l Conf. on Knowledge Discovery and
Data Mining. Las Vegas: ACM, 2008. 551–559. [doi: 10.1145/1401890.1401958]
[38] Feng L, An B. Partial label learning with self-guided retraining. In: Proc. of the 33rd AAAI Conf. on Artificial Intelligence. Honolulu:
AAAI, 2019. 3542–3549. [doi: 10.1609/aaai.v33i01.33013542]
[39] Yan Y, Guo YH. Partial label learning with batch label correction. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York:
AAAI, 2020. 6575–6582. [doi: 10.1609/aaai.v34i04.6132]
[40] Wu DD, Wang DB, Zhang ML. Revisiting consistency regularization for deep partial label learning. In: Proc. of the 39th Int’l Conf. on
Machine Learning. 2022. 24212–24225.
[41] E HH, Zhang WJ, Xiao SQ, Cheng R, Hu YX, Zhou XS, Niu PQ. Survey of entity relationship extraction based on deep learning. Ruan
Jian Xue Bao/Journal of Software, 2019, 30(6): 1793–1818 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5817.
htm [doi: 10.13328/j.cnki.jos.005817]