[18] Song L, Zhang Y, Wang Z, et al. A graph-to-sequence model for AMR-to-text generation. In: Proc. of the 56th Annual Meeting of the Association for Computational Linguistics (Vol.1: Long Papers). 2018. 1616-1626.
[19] Beck D, Haffari G, Cohn T. Graph-to-sequence learning using gated graph neural networks. In: Proc. of the 56th Annual Meeting of the Association for Computational Linguistics (Vol.1: Long Papers). 2018. 273-283.
[20] Damonte M, Cohen SB. Structural neural encoders for AMR-to-text generation. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Vol.1: Long and Short Papers). 2019. 3649-3658.
[21] Guo Z, Zhang Y, Teng Z, et al. Densely connected graph convolutional networks for graph-to-sequence learning. Trans. of the Association for Computational Linguistics, 2019, 7: 297-312.
[22] Ribeiro LFR, Gardent C, Gurevych I. Enhancing AMR-to-text generation with dual graph representations. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int'l Joint Conf. on Natural Language Processing. 2019. 3183-3194.
[23] Cai D, Lam W. Graph transformer for graph-to-sequence learning. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence. 2020. 7464-7471.
[24] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. In: Proc. of the 31st Int'l Conf. on Neural Information Processing Systems. 2017. 6000-6010.
[25] Zhao Y, Chen L, Chen Z, et al. Line graph enhanced AMR-to-text generation with mix-order graph attention networks. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. 732-741.
[26] Song L, Wang A, Su J, et al. Structural information preserving for graph-to-text generation. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. 7987-7998.
[27] Mikolov T, Sutskever I, Chen K, et al. Distributed representations of words and phrases and their compositionality. In: Proc. of the 26th Int'l Conf. on Neural Information Processing Systems, Vol.2. 2013. 3111-3119.
[28] Pennington J, Socher R, Manning CD. GloVe: Global vectors for word representation. In: Proc. of the 2014 Conf. on Empirical Methods in Natural Language Processing. 2014. 1532-1543.
[29] McCann B, Bradbury J, Xiong C, et al. Learned in translation: Contextualized word vectors. In: Proc. of the 31st Int'l Conf. on Neural Information Processing Systems. 2017. 6297-6308.
[30] Wang L, Zhao W, Jia R, et al. Denoising based sequence-to-sequence pre-training for text generation. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int'l Joint Conf. on Natural Language Processing. 2019. 3994-4006.
[31] Song K, Tan X, Qin T, et al. MASS: Masked sequence to sequence pre-training for language generation. In: Proc. of the Int'l Conf. on Machine Learning. 2019. 5926-5936.
[32] Lewis M, Liu Y, Goyal N, et al. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. 7871-7880.
[33] Mager M, Astudillo RF, Naseem T, et al. GPT-too: A language-model-first approach for AMR-to-Text generation. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. 1846-1852.
[34] Harkous H, Groves I, Saffari A. Have your text and use it too! End-to-end neural data-to-text generation with semantic fidelity. arXiv preprint arXiv:2004.06577, 2020.
[35] Sennrich R, Haddow B, Birch A. Neural machine translation of rare words with subword units. In: Proc. of the 54th Annual Meeting of the Association for Computational Linguistics (Vol.1: Long Papers). 2016. 1715-1725.
[36] Hu J, Xia M, Neubig G, et al. Domain adaptation of neural machine translation by lexicon induction. In: Proc. of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. 2989-3001.
[37] Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 2010, 11(12).
[38] Ge D, Li J, Zhu M, et al. Modeling source syntax and semantics for neural AMR parsing. In: Proc. of the 28th Int'l Joint Conf. on Artificial Intelligence. 2019. 4975-4981.