                     the 57th Annual Meeting of the Association for Computational Linguistics. Florence: Association for Computational Linguistics, 2019.
                     5467–5471. [doi: 10.18653/v1/P19-1544]
                  [5]  Ramshaw LA, Marcus MP. Text chunking using transformation-based learning. In: Armstrong S, Church K, Isabelle P, Manzi S, Tzoukermann E, Yarowsky D, eds. Natural Language Processing Using Very Large Corpora. Dordrecht: Springer, 1999. 157–176. [doi: 10.1007/978-94-017-2390-9_10]
                  [6]  Qin LB, Liu TL, Che WX, Kang BB, Zhao SD, Liu T. A co-interactive Transformer for joint slot filling and intent detection. In: Proc. of
                     the 2021 IEEE Int’l Conf. on Acoustics, Speech and Signal Processing (ICASSP). Toronto: IEEE, 2021. 8193–8197. [doi: 10.1109/
                     ICASSP39728.2021.9414110]
                  [7]  Zhang LH, Ma DH, Zhang XD, Yan XH, Wang HF. Graph LSTM with context-gated mechanism for spoken language understanding. In:
                     Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI Press, 2020. 9539–9546. [doi: 10.1609/aaai.v34i05.6499]
                  [8]  Wu D, Ding L, Lu F, Xie J. SlotRefine: A fast non-autoregressive model for joint intent detection and slot filling. In: Proc. of the 2020
                     Conf. on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. 1932–1937.
                     [doi: 10.18653/v1/2020.emnlp-main.152]
                  [9]  Liu YJ, Meng FD, Zhang JC, Zhou J, Chen YF, Xu JN. CM-Net: A novel collaborative memory network for spoken language understanding. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 1051–1060. [doi: 10.18653/v1/D19-1097]
                 [10]  Qin LB, Xie TB, Che WX, Liu T. A survey on spoken language understanding: Recent advances and new frontiers. In: Proc. of the 30th
                     Int’l Joint Conf. on Artificial Intelligence. Montreal: IJCAI.org, 2021. 4577–4584.
                 [11]  Wang X, Ji HY, Shi C, Wang B, Ye YF, Cui P, Yu PS. Heterogeneous graph attention network. In: Proc. of the 2019 World Wide Web Conf. San Francisco: Association for Computing Machinery, 2019. 2022–2032. [doi: 10.1145/3308558.3313562]
                 [12]  Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. In: Proc. of the 31st Int’l Conf. on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 6000–6010.
                 [13]  Veličković P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y. Graph attention networks. In: Proc. of the 6th Int’l Conf. on Learning Representations. Vancouver: ICLR, 2018.
                 [14]  Shi C, Li YT, Zhang JW, Sun YZ, Yu PS. A survey of heterogeneous information network analysis. IEEE Trans. on Knowledge and Data
                     Engineering, 2017, 29(1): 17–37. [doi: 10.1109/TKDE.2016.2598561]
                 [15]  Hemphill CT, Godfrey JJ, Doddington GR. The ATIS spoken language systems pilot corpus. In: Proc. of the 1990 Workshop on Speech
                     and Natural Language. Hidden Valley: Association for Computational Linguistics, 1990. 96–101. [doi: 10.3115/116580.116613]
                 [16]  Coucke A, Saade A, Ball A, Bluche T, Caulier A, Leroy D, Doumouro C, Gisselbrecht T, Caltagirone F, Lavril T, Primet M, Dureau J.
                     Snips voice platform: An embedded spoken language understanding system for private-by-design voice interfaces. arXiv:1805.10190,
                     2018.
                 [17]  Haffner P, Tur G, Wright JH. Optimizing SVMs for complex call classification. In: Proc. of the 2003 IEEE Int’l Conf. on Acoustics, Speech, and Signal Processing. Hong Kong: IEEE, 2003. I-632–I-635. [doi: 10.1109/ICASSP.2003.1198860]
                 [18]  Raymond C, Riccardi G. Generative and discriminative algorithms for spoken language understanding. In: Proc. of the 8th Annual Conf. of the Int’l Speech Communication Association (Interspeech). Antwerp: ISCA, 2007.
                 [19]  Deng L, Tur G, He XD, Hakkani-Tür D. Use of kernel deep convex networks and end-to-end learning for spoken language understanding. In: Proc. of the 2012 IEEE Spoken Language Technology Workshop (SLT). Miami: IEEE, 2012. 210–215. [doi: 10.1109/SLT.2012.6424224]
                 [20]  Tur G, Deng L, Hakkani-Tür D, He XD. Towards deeper understanding: Deep convex networks for semantic utterance classification. In:
                     Proc. of the 2012 IEEE Int’l Conf. on Acoustics, Speech and Signal Processing (ICASSP). Kyoto: IEEE, 2012. 5045–5048. [doi: 10.1109/
                     ICASSP.2012.6289054]
                 [21]  Ravuri S, Stolcke A. Recurrent neural network and LSTM models for lexical utterance classification. In: Proc. of the 16th Annual Conf. of the Int’l Speech Communication Association. Dresden: ISCA, 2015. 135–139. [doi: 10.21437/Interspeech.2015-42]
                 [22]  Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation, 1997, 9(8): 1735–1780. [doi: 10.1162/neco.1997.9.8.1735]
                 [23]  Wu CS, Hoi SCH, Socher R, Xiong CM. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In: Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. 917–929. [doi: 10.18653/v1/2020.emnlp-main.66]
                 [24]  Yao KS, Zweig G, Hwang MY, Shi YY, Yu D. Recurrent neural networks for language understanding. In: Proc. of the 2013 Interspeech.
                     Lyon: ISCA, 2013. 2524–2528.