2053]
[25] Willert N, Thiemann J. Template-based generator for single-choice questions. Technology, Knowledge and Learning, 2024, 29(1): 355–370. [doi: 10.1007/s10758-023-09659-5]
[26] Yu JX, Quan XJ, Su QL, Yin J. Generating multi-hop reasoning questions to improve machine reading comprehension. In: Proc. of the
Web Conf. 2020. Taipei: ACM, 2020. 281–291. [doi: 10.1145/3366423.3380114]
[27] Ang BH, Gollapalli SD, Ng SK. Socratic question generation: A novel dataset, models, and evaluation. In: Proc. of the 17th Conf. of the
European Chapter of the Association for Computational Linguistics. Dubrovnik: Association for Computational Linguistics, 2023.
147–165. [doi: 10.18653/v1/2023.eacl-main.12]
[28] Du XY, Shao JR, Cardie C. Learning to ask: Neural question generation for reading comprehension. In: Proc. of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver: Association for Computational Linguistics, 2017. 1342–1352. [doi: 10.18653/v1/P17-1123]
[29] Su D, Xu Y, Winata GI, Xu P, Kim H, Liu ZH, Fung P. Generalizing question answering system with pre-trained language model fine-
tuning. In: Proc. of the 2nd Workshop on Machine Reading for Question Answering. Hong Kong: Association for Computational
Linguistics, 2019. 203–211. [doi: 10.18653/v1/D19-5827]
[30] Wang WC, Feng S, Wang DL, Zhang YF. Answer-guided and semantic coherent question generation in open-domain conversation. In:
Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language
Processing. Hong Kong: Association for Computational Linguistics, 2019. 5066–5076. [doi: 10.18653/v1/D19-1511]
[31] Yang JY, Dong YH, Qian JB. Research progress of few-shot learning methods based on graph neural networks. Journal of Computer Research and Development, 2024, 61(4): 856–876 (in Chinese with English abstract). [doi: 10.7544/issn1000-1239.202220933]
[32] Bao JW, Gong YY, Duan N, Zhou M, Zhao TJ. Question generation with doubly adversarial nets. IEEE/ACM Trans. on Audio, Speech,
and Language Processing, 2018, 26(11): 2230–2239. [doi: 10.1109/TASLP.2018.2859777]
[33] Wang JY, Li JL, Zhao H. Self-prompted chain-of-thought on large language models for open-domain multi-hop reasoning. In: Findings
of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics, 2023.
2717–2731. [doi: 10.18653/v1/2023.findings-emnlp.179]
[34] Wang LY, Xu ZH, Lin ZB, Zheng HT, Shen Y. Answer-driven deep question generation based on reinforcement learning. In: Proc. of the
28th Int’l Conf. on Computational Linguistics. Barcelona: International Committee on Computational Linguistics, 2020. 5159–5170. [doi:
10.18653/v1/2020.coling-main.452]
[35] Kulshreshtha S, Rumshisky A. Reasoning circuits: Few-shot multi-hop question generation with structured rationales. In: Proc. of the 1st
Workshop on Natural Language Reasoning and Structured Explanations. Toronto: Association for Computational Linguistics, 2023.
59–77. [doi: 10.18653/v1/2023.nlrse-1.6]
[36] Jia X, Wang H, Yin DW, Wu YF. Enhancing question generation with commonsense knowledge. In: Proc. of the 20th China National
Conf. on Chinese Computational Linguistics. Hohhot: Springer, 2021. 145–160. [doi: 10.1007/978-3-030-84186-7_10]
[37] Li ZP, Cao Z, Li PF, Zhong Y, Li SB. Multi-hop question generation with knowledge graph-enhanced language model. Applied
Sciences, 2023, 13(9): 5765. [doi: 10.3390/app13095765]
[38] Yu WH, Zhu CG, Qin LH, Zhang ZH, Zhao T, Jiang M. Diversifying content generation for commonsense reasoning with mixture of
knowledge graph experts. In: Findings of the Association for Computational Linguistics: ACL 2022. Dublin: Association for
Computational Linguistics, 2022. 1896–1906. [doi: 10.18653/v1/2022.findings-acl.149]
[39] Yu JX, Wang SQ, Zheng LB, Su QL, Liu W, Zhao BQ, Yin J. Generating deep questions with commonsense reasoning ability from the
text by disentangled adversarial inference. In: Findings of the Association for Computational Linguistics: ACL 2023. Toronto: Association for
Computational Linguistics, 2023. 470–486. [doi: 10.18653/v1/2023.findings-acl.30]
[40] Zhao WM, Alwidian S, Mahmoud QH. Adversarial training methods for deep learning: A systematic review. Algorithms, 2022, 15(8): 283. [doi: 10.3390/a15080283]
[41] Gao YF, Bing LD, Chen W, Lyu MR, King I. Difficulty controllable generation of reading comprehension questions. In: Proc. of the 28th
Int’l Joint Conf. on Artificial Intelligence. Macao, 2019. 4968–4974.
[42] Kumar V, Hua YC, Ramakrishnan G, Qi GL, Gao LL, Li YF. Difficulty-controllable multi-hop question generation from knowledge
graphs. In: Proc. of the 18th Int’l Semantic Web Conf. Auckland: Springer, 2019. 382–398. [doi: 10.1007/978-3-030-30793-6_22]
[43] Bi S, Liu JY, Miao ZY, Min QZ. Difficulty-controllable question generation over knowledge graphs: A counterfactual reasoning
approach. Information Processing & Management, 2024, 61(4): 103721. [doi: 10.1016/j.ipm.2024.103721]
[44] Yang KC, Deng JK, An X, Li JW, Feng ZY, Guo J, Yang J, Liu TL. ALIP: Adaptive language-image pre-training with synthetic caption.

