arXiv:2212.07249, 2024.
[10] Wang B, Ju JZ, Fan Y, Dai XY, Huang SJ, Chen JJ. Structure-unified M-tree coding solver for math word problem. In: Proc. of the 2022 Conf. on Empirical Methods in Natural Language Processing. Abu Dhabi: Association for Computational Linguistics, 2022. 8122–8132. [doi: 10.18653/v1/2022.emnlp-main.556]
[11] Jie ZM, Li JR, Lu W. Learning to reason deductively: Math word problem solving as complex relation extraction. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers). Dublin: Association for Computational Linguistics, 2022. 5944–5955. [doi: 10.18653/v1/2022.acl-long.410]
[12] Liang ZW, Zhang JP, Wang L, Qin W, Lan YS, Shao J, Zhang XL. MWP-BERT: Numeracy-augmented pre-training for math word problem solving. In: Proc. of the 2022 Findings of the Association for Computational Linguistics: NAACL 2022. Seattle: Association for Computational Linguistics, 2022. 997–1009. [doi: 10.18653/v1/2022.findings-naacl.74]
[13] Wang Y, Liu XJ, Shi SM. Deep neural solver for math word problems. In: Proc. of the 2017 Conf. on Empirical Methods in Natural Language Processing. Copenhagen: Association for Computational Linguistics, 2017. 845–854. [doi: 10.18653/v1/D17-1088]
[14] Amini A, Gabriel S, Lin P, Koncel-Kedziorski R, Choi Y, Hajishirzi H. MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv:1905.13319, 2019.
[15] Koncel-Kedziorski R, Roy S, Amini A, Kushman N, Hajishirzi H. MAWPS: A math word problem repository. In: Proc. of the 2016 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego: Association for Computational Linguistics, 2016. 1152–1157. [doi: 10.18653/v1/N16-1136]
[16] Chen XL, Chiticariu L, Danilevsky M, Evfimievski A, Sen P. A rectangle mining method for understanding the semantics of financial tables. In: Proc. of the 14th IAPR Int’l Conf. on Document Analysis and Recognition (ICDAR). Kyoto: IEEE, 2017. 268–273. [doi: 10.1109/ICDAR.2017.52]
[17] Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. In: Proc. of the 37th Int’l Conf. on Machine Learning. JMLR.org, 2020. 1597–1607.
[18] He KM, Fan HQ, Wu YX, Xie SN, Girshick R. Momentum contrast for unsupervised visual representation learning. In: Proc. of the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020. 9726–9735. [doi: 10.1109/CVPR42600.2020.00975]
[19] Gao TY, Yao XC, Chen DQ. SimCSE: Simple contrastive learning of sentence embeddings. In: Proc. of the 2021 Conf. on Empirical Methods in Natural Language Processing. Punta Cana: Association for Computational Linguistics, 2021. 6894–6910. [doi: 10.18653/v1/2021.emnlp-main.552]
[20] Luo Y, Guo F, Liu ZH, Zhang Y. Mere contrastive learning for cross-domain sentiment analysis. In: Proc. of the 29th Int’l Conf. on Computational Linguistics. Gyeongju: Int’l Committee on Computational Linguistics, 2022. 7099–7111.
[21] Yue ZR, Kratzwald B, Feuerriegel S. Contrastive domain adaptation for question answering using limited text corpora. In: Proc. of the 2021 Conf. on Empirical Methods in Natural Language Processing. Punta Cana: Association for Computational Linguistics, 2021. 9575–9593. [doi: 10.18653/v1/2021.emnlp-main.754]
[22] You CY, Chen N, Zou YX. Self-supervised contrastive cross-modality representation learning for spoken question answering. In: Proc. of the 2021 Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana: Association for Computational Linguistics, 2021. 28–39. [doi: 10.18653/v1/2021.findings-emnlp.3]
[23] Yang N, Wei FR, Jiao BX, Jiang DX, Yang LJ. xMoCo: Cross momentum contrastive learning for open-domain question answering. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing (Vol. 1: Long Papers). Association for Computational Linguistics, 2021. 6120–6129. [doi: 10.18653/v1/2021.acl-long.477]
[24] van den Oord A, Li YZ, Vinyals O. Representation learning with contrastive predictive coding. arXiv:1807.03748, 2019.
[25] Chen YH, Jia YH, Tan CY, Chen WL, Zhang M. Method for complex question answering based on global and local features of knowledge graph. Ruan Jian Xue Bao/Journal of Software, 2023, 34(12): 5614–5628 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6799.htm [doi: 10.13328/j.cnki.jos.006799]
[26] Chen WH, Zha HW, Chen ZY, Xiong WH, Wang H, Wang W. HybridQA: A dataset of multi-hop question answering over tabular and textual data. arXiv:2004.07347, 2021.
[27] Li X, Sun YW, Cheng G. TSQA: Tabular scenario based question answering. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. AAAI, 2021. 13297–13305. [doi: 10.1609/aaai.v35i15.17570]
[28] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Vol. 1: Long and Short Papers). Minneapolis: Association for Computational Linguistics, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
[29] Beltagy I, Peters ME, Cohan A. Longformer: The long-document Transformer. arXiv:2004.05150, 2020.