Journal of Software (Ruan Jian Xue Bao) ISSN 1000-9825, CODEN RUXUEW, E-mail: jos@iscas.ac.cn
2025,36(4):1557−1569 [doi: 10.13328/j.cnki.jos.007189] [CSTR: 32375.14.jos.007189] http://www.jos.org.cn
© Copyright by Institute of Software, Chinese Academy of Sciences. All rights reserved. Tel: +86-10-62562563
Slot Dependency Modeling for Cross-domain Slot Filling
WANG Ze, ZHOU Xia-Bing, JU Xin, WANG Zhong-Qing, ZHOU Guo-Dong
(School of Computer Science and Technology, Soochow University, Suzhou 215006, China)
Corresponding author: ZHOU Xia-Bing, E-mail: zhouxiabing@suda.edu.cn
CLC number: TP18
Citation format (in Chinese): 王泽, 周夏冰, 鞠鑫, 王中卿, 周国栋. 基于槽依赖建模的跨领域槽填充方法. 软件学报, 2025, 36(4): 1557–1569.
http://www.jos.org.cn/1000-9825/7189.htm
Citation format (in English): Wang Z, Zhou XB, Ju X, Wang ZQ, Zhou GD. Slot Dependency Modeling for Cross-domain Slot Filling. Ruan Jian
Xue Bao/Journal of Software, 2025, 36(4): 1557–1569 (in Chinese). http://www.jos.org.cn/1000-9825/7189.htm
Abstract: Slot filling, a core component of task-oriented dialogue systems, serves downstream tasks by identifying specific slot entities in utterances. However, for any specific domain it requires a large amount of labeled data, which is costly to collect. In this context, cross-domain slot filling has emerged, efficiently alleviating data scarcity through transfer learning. Existing cross-domain slot filling methods, however, overlook the dependencies between slot types within an utterance, which leads to suboptimal performance when models are transferred to new domains. To address this issue, this study proposes a cross-domain slot filling method based on slot dependency modeling. Building on prompt learning with generative pre-trained models, a prompt template that incorporates slot dependency information is designed; it establishes implicit dependency relationships between different slot types and fully exploits the slot entity prediction ability of the pre-trained model. Furthermore, to strengthen the semantic dependencies among slot types, slot entities, and the utterance text, an utterance filling subtask is introduced, which reinforces the inherent connection between utterances and slot entities through reverse filling. Transfer experiments across multiple domains demonstrate that the proposed model achieves substantial performance improvements in both zero-shot and few-shot settings. In addition, detailed analyses and ablation experiments are conducted on the main components of the model to further validate the necessity of each part.
Key words: slot filling; dialogue system; prompt learning
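To make the two mechanisms named in the abstract more concrete, the following is a minimal sketch, not the paper's actual template or code: the slot types of a domain are chained into a single prompt so that a generative pre-trained model predicts each slot value conditioned on the utterance and on the values already generated (implicit slot dependencies), and a reverse utterance filling example masks the gold slot values and asks the model to reconstruct the utterance. The example utterance, slot names, and placeholder format are illustrative assumptions.

```python
# Illustrative sketch only (NOT the authors' exact template): chained slot-dependency
# prompt and reverse "utterance filling" auxiliary example for a generative model.

def build_slot_dependency_prompt(utterance: str, slot_types: list[str]) -> str:
    """Input prompt: all slot types of the domain are asked in one chained sequence."""
    chained = "; ".join(f'"{s}" is [{s}]' for s in slot_types)
    return f'utterance: "{utterance}". In this utterance, {chained}.'

def build_slot_target(slot_values: dict[str, str], slot_types: list[str]) -> str:
    """Training target: slot values emitted in the same order ("none" if absent),
    so each value is generated after the values of the preceding slot types."""
    return "; ".join(f'"{s}" is {slot_values.get(s, "none")}' for s in slot_types)

def build_utterance_filling_example(utterance: str, slot_values: dict[str, str]) -> tuple[str, str]:
    """Reverse filling: source masks slot values with [slot_type] placeholders,
    target is the original utterance, tying slot entities back to their context."""
    masked = utterance
    for slot_type, value in slot_values.items():
        masked = masked.replace(value, f"[{slot_type}]")
    return masked, utterance

if __name__ == "__main__":
    utt = "book a table for two at an italian restaurant in cambridge"
    slots = ["restaurant_type", "city", "party_size"]
    gold = {"restaurant_type": "italian", "city": "cambridge", "party_size": "two"}
    print(build_slot_dependency_prompt(utt, slots))
    print(build_slot_target(gold, slots))
    print(build_utterance_filling_example(utt, gold))
```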
* Funding: National Natural Science Foundation of China (62176174, 61936010)
Received: 2023-11-09; Revised: 2024-01-08, 2024-03-13; Accepted: 2024-03-22; Published online (JoS): 2024-06-14
Published online first (CNKI): 2024-06-21