The consistency regularization component, in turn, constrains the outputs of the augmented labeled samples to remain consistent with those of the original samples, which improves classification performance on the target domain. The proposed method was compared against alternatives on multiple unsupervised domain adaptation tasks, and the experimental results confirm that it achieves better adaptation performance. Although A-UDA performs well on several domain adaptation datasets, there is still room for improvement. Future work will further exploit the information carried by the unlabeled target-domain data, examining how factors such as pseudo-labels affect the model, in order to improve its accuracy and robustness. The influence of different distribution distance metrics on the adaptation results will also be investigated.
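To make the consistency constraint concrete, the following is a minimal sketch, not the paper's A-UDA implementation: the model `model`, the augmentation function `augment`, and the choice of KL divergence as the consistency measure are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, augment):
    """Consistency regularization sketch: penalize the divergence between
    the model's predictions on a batch and on an augmented view of it.

    `model`, `augment`, and the KL-divergence form are illustrative
    assumptions, not the paper's exact A-UDA formulation.
    """
    with torch.no_grad():
        # Predictions on the original samples serve as the fixed target.
        p_orig = F.softmax(model(x), dim=1)
    # Predictions on the randomly augmented samples are pushed toward them.
    log_p_aug = F.log_softmax(model(augment(x)), dim=1)
    # KL(p_orig || p_aug), averaged over the batch.
    return F.kl_div(log_p_aug, p_orig, reduction="batchmean")
```

Stopping the gradient through the original-sample predictions, as above, is one common design choice; it treats them as pseudo-targets rather than letting both branches drift together.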
Although the proposed method has only been validated on several handwritten digit recognition and visual object classification tasks, the experimental results show that A-UDA achieves good classification performance in applications that take images as input. A-UDA is not restricted to image data, however: it can also be applied to classification, recognition, and regression tasks on non-image data such as text, biological data, and time series. For example, this paper uses random augmentation to increase the number of target-domain samples used to compute the consistency regularization loss; in text classification or protein structure prediction tasks, target-domain samples generated by generative adversarial networks, large language models, or protein structure prediction models could instead be used to compute this loss. Likewise, the confidence computation for the added pseudo-labels can be extended to confidence estimation for the labels of such sequential or structured data, thus enabling unsupervised domain adaptation on these non-image data. In future work, A-UDA will be further refined to match the characteristics of the data in these application areas: by improving its sample augmentation and label augmentation components, the method can deliver better performance on these tasks and extend well to a variety of scenarios.
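As a hedged illustration of the confidence-based pseudo-labeling mentioned above, the sketch below keeps only those target samples whose maximum predicted class probability exceeds a threshold; the 0.95 threshold and the function name `select_pseudo_labels` are assumptions for illustration, not the paper's exact confidence computation.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(model, x_target, threshold=0.95):
    """Confidence-based pseudo-labeling sketch: keep only target samples
    whose maximum predicted class probability exceeds `threshold`.

    The threshold value and selection rule are illustrative assumptions;
    the paper's own confidence computation may differ.
    """
    with torch.no_grad():
        probs = F.softmax(model(x_target), dim=1)
        conf, labels = probs.max(dim=1)  # per-sample confidence and label
    mask = conf >= threshold             # boolean mask of confident samples
    return x_target[mask], labels[mask]
```

In practice the threshold trades off pseudo-label quantity against quality: a higher value yields fewer but more reliable labels, which matters when the labels feed back into training.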