5736 软件学报 2025 年第 36 卷第 12 期
In: Proc. of the 2023 IEEE/CVF Int’l Conf. on Computer Vision. Paris: IEEE, 2023. 2910–2919. [doi: 10.1109/ICCV51070.2023.00273]
[45] Li MH, Lv TC, Chen JY, Cui L, Lu YJ, Florencio D, Zhang C, Li ZJ, Wei FR. TrOCR: Transformer-based optical character recognition with pre-trained models. In: Proc. of the 37th AAAI Conf. on Artificial Intelligence. Washington: AAAI, 2023. 13094–13102. [doi: 10.1609/aaai.v37i11.26538]
[46] Dozat T, Manning CD. Deep biaffine attention for neural dependency parsing. arXiv:1611.01734, 2017.
[47] Du XY, Liu MW, Shen LW, Peng X. Survey on representation learning methods of knowledge graph for link prediction. Ruan Jian Xue Bao/Journal of Software, 2024, 35(1): 87–117 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6902.htm [doi: 10.13328/j.cnki.jos.006902]
[48] Xu FY, Shi WJ, Choi E. RECOMP: Improving retrieval-augmented LMs with compression and selective augmentation.
arXiv:2310.04408, 2023.
[49] Wang QF, Wang JG, Quan XJ, Feng FL, Xu ZL, Nie SL, Wang SN, Khabsa M, Firooz H, Liu DF. MUSTIE: Multimodal structural
Transformer for Web information extraction. In: Proc. of the 61st Annual Meeting of the Association for Computational Linguistics.
Toronto: Association for Computational Linguistics, 2023. 2405–2420. [doi: 10.18653/v1/2023.acl-long.135]
[50] Speer R, Chin J, Havasi C. ConceptNet 5.5: An open multilingual graph of general knowledge. In: Proc. of the 31st AAAI Conf. on
Artificial Intelligence. San Francisco: AAAI, 2017. 4444–4451. [doi: 10.1609/aaai.v31i1.11164]
[51] Deng Z, Zhu Y, Chen Y, Witbrock M, Riddle P. Interpretable AMR-based question decomposition for multi-hop question answering. In:
Proc. of the 31st Int’l Joint Conf. on Artificial Intelligence. Vienna, 2022. 4093–4099.
[52] Cao SY, Wang L. Controllable open-ended question generation with a new question type ontology. In: Proc. of the 59th Annual Meeting
of the Association for Computational Linguistics and Proc. of the 11th Int’l Joint Conf. on Natural Language Processing. Association for
Computational Linguistics, 2021. 6424–6439. [doi: 10.18653/v1/2021.acl-long.502]
[53] Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, Cistac P, Rault T, Louf R, Funtowicz M, Davison J, Shleifer S, von Platen P,
Ma C, Jernite Y, Plu J, Xu CW, Le Scao T, Gugger S, Drame M, Lhoest Q, Rush A. Transformers: State-of-the-art natural language
processing. In: Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing: System Demonstrations. Association for
Computational Linguistics, 2020. 38–45. [doi: 10.18653/v1/2020.emnlp-demos.6]
[54] Rajpurkar P, Zhang J, Lopyrev K, Liang P. SQuAD: 100,000+ questions for machine comprehension of text. In: Proc. of the 2016 Conf. on Empirical Methods in Natural Language Processing. Austin: Association for Computational Linguistics, 2016. 2383–2392. [doi: 10.18653/v1/D16-1264]
[55] Li XL, Liang P. Prefix-tuning: Optimizing continuous prompts for generation. In: Proc. of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing. Association for Computational Linguistics,
2021. 4582–4597. [doi: 10.18653/v1/2021.acl-long.353]
[56] Honnibal M, Montani I. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. 2017. To appear.
[57] Holtzman A, Buys J, Du L, Forbes M, Choi Y. The curious case of neural text degeneration. arXiv:1904.09751, 2020.
[58] Shaheer S, Hossain I, Sarna SN, Mehedi MHK, Rasel AA. Evaluating question generation models using QA systems and semantic textual similarity. In: Proc. of the 13th IEEE Annual Computing and Communication Workshop and Conf. Las Vegas: IEEE, 2023. 431–435. [doi: 10.1109/CCWC57344.2023.10099244]
[59] Zhang ZS, Wu YW, Zhou JR, Duan SF, Zhao H, Wang R. SG-Net: Syntax-guided machine reading comprehension. In: Proc. of the 34th
AAAI Conf. on Artificial Intelligence. New York: AAAI, 2020. 9636–9643. [doi: 10.1609/aaai.v34i05.6511]
[60] Rajpurkar P, Jia R, Liang P. Know what you don’t know: Unanswerable questions for SQuAD. In: Proc. of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne: Association for Computational Linguistics, 2018. 784–789. [doi: 10.18653/v1/P18-2124]
[61] He PC, Liu XD, Gao JF, Chen WZ. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv:2006.03654, 2021.
[62] Dhingra B, Liu HX, Yang ZL, Cohen W, Salakhutdinov R. Gated-attention readers for text comprehension. In: Proc. of the 55th Annual
Meeting of the Association for Computational Linguistics. Vancouver: Association for Computational Linguistics, 2017. 1832–1846.
[doi: 10.18653/v1/P17-1168]
[63] Dara S, Srinivasulu CH, Babu CHM, Ravuri A, Paruchuri T, Kilak AS, Vidyarthi A. Context-aware auto-encoded graph neural model for dynamic question generation using NLP. ACM Trans. on Asian and Low-resource Language Information Processing, 2023. [doi: 10.1145/3626317]
[64] Seo MJ, Kembhavi A, Farhadi A, Hajishirzi H. Bidirectional attention flow for machine comprehension. arXiv:1611.01603, 2018.
[65] Zhao RC, Li XX, Joty S, Qin CW, Bing LD. Verify-and-edit: A knowledge-enhanced chain-of-thought framework. In: Proc. of the 61st

