
…the Chinese entailment chunk-type recognition task, which identifies both the chunks and their types, exploring a new task form for Chinese textual entailment recognition. The experimental results show that entailment recognition tasks can share semantic knowledge through a BERT model pre-trained on large-scale data, effectively predicting the chunks in a sentence pair that exhibit entailment together with their position information. These experiments provide a reliable baseline for Chinese textual entailment recognition on small-scale datasets.
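To make the BERT-based setup concrete, the following is a minimal sketch, not the authors' released code, of token-level chunk prediction over a sentence pair. It assumes the Hugging Face transformers library and the public bert-base-chinese checkpoint; the B/I/O chunk labels and the example premise/hypothesis pair are invented placeholders, and the classification head is untrained until fine-tuned on the task data.

    import torch
    from transformers import BertTokenizerFast, BertForTokenClassification

    # Placeholder label set: O plus B-/I- tags marking entailment chunks.
    labels = ["O", "B-CHUNK", "I-CHUNK"]

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    # The token-classification head is randomly initialized here; it only
    # becomes useful after fine-tuning on the annotated chunk data.
    model = BertForTokenClassification.from_pretrained(
        "bert-base-chinese", num_labels=len(labels)
    )

    # The premise and hypothesis are packed into one input sequence, so the
    # shared encoder sees both sentences when labeling each token.
    premise = "一名男子在公园里弹吉他。"    # invented example
    hypothesis = "一名男子在演奏乐器。"      # invented example
    enc = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

    with torch.no_grad():
        logits = model(**enc).logits             # (1, seq_len, num_labels)
    pred_ids = logits.argmax(dim=-1).squeeze(0)  # per-token label indices

    # Reading the tags off the tokens recovers chunk spans and their positions.
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0).tolist())
    for token, pred in zip(tokens, pred_ids.tolist()):
        print(token, labels[pred])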
There is still room for improvement in this work. Chinese entailment chunk-type recognition involves 17 prediction labels, and each label must encode both the position of a chunk and its relation type. Because the labels are numerous and their content is complex, its prediction results are lower than those of Chinese entailment type recognition, which only predicts the type. After analyzing the results of the entailment type recognition experiments, we found that the model struggles to learn near-synonym relations in the entailment data; this suggests that, in future work, external knowledge could be incorporated into the model to improve prediction accuracy. Low-level features such as lexical and syntactic structure are important model inputs and will have a significant impact on model performance, so these features will also be a key focus of our future research. In addition, Chinese entailment types partially overlap with English entailment types; we hope to annotate some English entailment data and carry out a set of comparative Chinese-English entailment recognition experiments to examine whether deep learning models perform differently on the three tasks in this paper.
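As a rough illustration of why this joint label set is harder to learn than type-only prediction, the sketch below shows one way a 17-label inventory can arise when chunk position and relation type are encoded in a single tag. The eight relation names are placeholders rather than the paper's actual inventory, and the paper's construction may differ; the point is only that a B/I prefix per type plus a single O tag multiplies the label space the model must predict.

    # Placeholder relation types; the paper's actual type inventory is not listed here.
    relation_types = [f"TYPE_{i}" for i in range(1, 9)]

    # Each in-chunk token carries both a position prefix (B or I) and a relation
    # type; tokens outside any entailment chunk are tagged O.
    labels = ["O"] + [f"{prefix}-{rel}" for rel in relation_types for prefix in ("B", "I")]
    label2id = {label: idx for idx, label in enumerate(labels)}

    assert len(labels) == 17  # 2 * 8 + 1: position and relation type predicted jointly
    print(labels)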

         References:
 [1]    Guo MS, Zhang Y, Liu T. Research advances and prospect of recognizing textual entailment and knowledge acquisition. Chinese
        Journal of Computers, 2017,40(4):889−910 (in Chinese with English abstract).
        http://cjc.ict.ac.cn/online/onlinepaper/gms-201745180721.pdf [doi: 10.11897/SP.J.1016.2017.00889]
          [2]    Li JM. An overview of the research on prefabricated chunks home and abroad. Shandong Foreign Language Teaching Journal, 2011,
             32(5):17−23 (in Chinese with English abstract).
          [3]    Skehan P. A Cognitive Approach to Language Learning. Oxford: Oxford University Press, 1998.
          [4]    Wray A. Formulaic Language and the Lexicon. Cambridge: Cambridge University Press, 2005.
          [5]    Russell B. Introduction to Mathematical Philosophy. North Chelmsford: Courier Corporation, 1993.
          [6]    Flew A. A Dictionary of Philosophy. London: Pan Book Ltd., 1979.
 [7]    Bowman SR, Angeli G, Potts C, et al. A large annotated corpus for learning natural language inference. arXiv preprint
        arXiv:1508.05326, 2015.
 [8]    Rocktäschel T, Grefenstette E, Hermann KM, et al. Reasoning about entailment with neural attention. arXiv preprint
        arXiv:1509.06664, 2015.
          [9]    Liu Y, Sun C, Lin L, et al. Learning natural language inference using bidirectional LSTM model and inner-attention. arXiv preprint
             arXiv:1605.09090, 2016.
[10]    Sammons M, Vydiswaran VGV, Vieira T, et al. Relation alignment for textual entailment recognition. In: Proc. of the Text
        Analysis Conf. (TAC). 2009.
[11]    Chen Q, Zhu X, Ling Z, et al. Enhanced LSTM for natural language inference. arXiv preprint arXiv:1609.06038, 2016.
[12]    Devlin J, Chang MW, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv
        preprint arXiv:1810.04805, 2018.
         [13]    Dagan I, Glickman O. Probabilistic textual entailment: Generic applied modeling of language variability. In: Proc. of the PASCAL
             Workshop on Learning Methods for Text Understanding and Mining. 2004. 26−29.
[14]    Dagan I, Glickman O, Magnini B. The PASCAL recognising textual entailment challenge. In: Quiñonero-Candela J, et al., eds.
        Proc. of the Int'l Conf. on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and
        Recognizing Textual Entailment. Springer-Verlag, 2005. 177−190.
[15]    Bar-Haim R, Dagan I, Dolan B, et al. The 2nd PASCAL recognising textual entailment challenge. In: Proc. of the 2nd PASCAL
        Challenges Workshop on Recognising Textual Entailment. 2006,6(1):6.4.
[16]    Giampiccolo D, Magnini B, Dagan I, et al. The 3rd PASCAL recognizing textual entailment challenge. In: Proc. of the
        ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Association for Computational Linguistics, 2007. 1−9.
         [17]    Shima H, Kanayama H, Lee CW, et al. Overview of NTCIR-9 RITE: Recognizing inference in text. In: Proc. of the 9th NII Test
             Collection for Information Retrieval Workshop. 2011. 291−301.
         [18]    Watanabe Y, Miyao Y, Mizuno J, et al. Overview of the recognizing inference in text (RITE-2) at NTCIR-10. In: Proc. of the 10th
             NII Test Collection for Information Retrieval Workshop. 2013. 385−404.