[25] Mesnil G, He XD, Deng L, Bengio Y. Investigation of recurrent-neural-network architectures and learning methods for spoken language
understanding. In: Proc. of the 2013 Interspeech. Lyon: ISCA, 2013. 3771–3775.
[26] Mesnil G, Dauphin Y, Yao KS, Bengio Y, Deng L, Hakkani-Tur D, He XD, Heck L, Tur G, Yu D, Zweig G. Using recurrent neural
networks for slot filling in spoken language understanding. IEEE/ACM Trans. on Audio, Speech, and Language Processing, 2015, 23(3):
530–539. [doi: 10.1109/TASLP.2014.2383614]
[27] Coope S, Farghly T, Gerz D, Vulić I, Henderson M. Span-ConveRT: Few-shot span extraction for dialog with pretrained conversational representations. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, 2020. 107–121. [doi: 10.18653/v1/2020.acl-main.11]
[28] Zhang XD, Wang HF. A joint model of intent determination and slot filling for spoken language understanding. In: Proc. of the 25th Int’l
Joint Conf. on Artificial Intelligence. New York: AAAI Press, 2016. 2993–2999.
[29] Liu B, Lane I. Attention-based recurrent neural network models for joint intent detection and slot filling. In: Proc. of the 2016 Interspeech.
San Francisco: ISCA, 2016. 685–689. [doi: 10.21437/Interspeech.2016-1352]
[30] Goo CW, Gao G, Hsu YK, Huo CL, Chen TC, Hsu KW, Chen YN. Slot-gated modeling for joint slot filling and intent prediction. In:
Proc. of the 2018 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language
Technologies, Vol. 2 (Short Papers). New Orleans: Association for Computational Linguistics, 2018. 753–757. [doi: 10.18653/v1/N18-2118]
[31] Li CL, Li L, Qi J. A self-attentive model with gate mechanism for spoken language understanding. In: Proc. of the 2018 Conf. on
Empirical Methods in Natural Language Processing. Brussels: Association for Computational Linguistics, 2018. 3824–3833. [doi: 10.18653/v1/D18-1417]
[32] Qin LB, Che WX, Li YM, Wen HY, Liu T. A stack-propagation framework with token-level intent detection for spoken language
understanding. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural
Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 2078–2087. [doi: 10.18653/v1/D19-1214]
[33] Wang JX, Wei K, Radfar M, Zhang WW, Chung C. Encoding syntactic knowledge in transformer encoder for intent detection and slot
filling. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. Palo Alto: AAAI Press, 2021. 13943–13951.
[34] Kim Y. Convolutional neural networks for sentence classification. In: Proc. of the 2014 Conf. on Empirical Methods in Natural Language
Processing (EMNLP). Doha: Association for Computational Linguistics, 2014. 1746–1751. [doi: 10.3115/v1/D14-1181]
[35] Kingma DP, Ba J. Adam: A method for stochastic optimization. In: Proc. of the 3rd Int’l Conf. on Learning Representations. San Diego:
ICLR, 2015.
[36] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proc.
of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol.
1 (Long and Short Papers). Minneapolis: Association for Computational Linguistics, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
Zhang Qichen (张启辰, 1993-), male, Ph.D. candidate. His research interests include natural language processing and dialogue systems.

Li Jingmei (李静梅, 1964-), female, Ph.D., professor, doctoral supervisor. Her research interests include natural language processing, big data, and cloud computing.

Wang Shuai (王帅, 1998-), male, master's student. His research interests include natural language processing and dialogue systems.