Page 347 - 《软件学报》2025年第7期
P. 347
3268 软件学报 2025 年第 36 卷第 7 期
on Applications of Computer Vision. Lake Tahoe: IEEE, 2018. 1982–1991. [doi: 10.1109/WACV.2018.00219]
[17] Long MS, Cao ZJ, Wang JM, Jordan MI. Conditional adversarial domain adaptation. In: Proc. of the 32nd Int’l Conf. on Neural
Information Processing Systems. Red Hook: Curran Associates Inc., 2018. 1647–1657.
[18] Cui SH, Wang SH, Zhuo JB, Su C, Huang QM, Tian Q. Gradually vanishing bridge for adversarial domain adaptation. In: Proc. of the
2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 12452–12461.
[19] Wang XM, Li L, Ye WR, Long MS, Wang JM. Transferable attention for domain adaptation. In: Proc. of the 33rd AAAI Conf. on
Artificial Intelligence. AAAI Press, 2019. 655. [doi: 10.1609/aaai.v33i01.33015345]
[20] Matsuura T, Harada T. Domain generalization using a mixture of multiple latent domains. In: Proc. of the 34th AAAI Conf. on Artificial
Intelligence. New York: AAAI Press, 2020. 11749–11756. [doi: 10.1609/aaai.v34i07.6846]
[21] Wei YY, Zhang Z, Wang Y, Xu ML, Yang Y, Yan SC, Wang M. DerainCycleGAN: Rain attentive CycleGAN for single image deraining
and rainmaking. IEEE Trans. on Image Processing, 2021, 30: 4788–4801. [doi: 10.1109/TIP.2021.3074804]
[22] Gao R, Hou XS, Qin J, Chen JX, Liu L, Zhu F, Zhang Z, Shao L. Zero-VAE-GAN: Generating unseen features for generalized and
transductive zero-shot learning. IEEE Trans. on Image Processing, 2020, 29: 3665–3680. [doi: 10.1109/TIP.2020.2964429]
[23] Gao XJ, Zhang Z, Mu TT, Zhang XD, Cui CR, Wang M. Self-attention driven adversarial similarity learning network. Pattern
Recognition, 2020, 105: 107331. [doi: 10.1016/j.patcog.2020.107331]
[24] Pei ZY, Cao ZJ, Long MS, Wang JM. Multi-adversarial domain adaptation. In: Proc. of the 32nd Conf. on Artificial Intelligence. New
Orleans: AAAI Press, 2018. 3934–3941. [doi: 10.1609/aaai.v32i1.11767]
[25] Zhang WC, Xu D, Ouyang WL, Li W. Self-paced collaborative and adversarial network for unsupervised domain adaptation. IEEE Trans.
on Pattern Analysis and Machine Intelligence, 2021, 43(6): 2047–2061. [doi: 10.1109/TPAMI.2019.2962476]
[26] Long MS, Zhu H, Wang JM, Jordan MI. Deep transfer learning with joint adaptation networks. In: Proc. of the 34th Int’l Conf. on
Machine Learning. Sydney: PMLR, 2017. 2208–2217.
[27] Kang GL, Jiang L, Yang Y, Hauptmann AG. Contrastive adaptation network for unsupervised domain adaptation. In: Proc. of the 2019
IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 4888–4897. [doi: 10.1109/CVPR.2019.00503]
[28] Chen MH, Zhao S, Liu HF, Cai D. Adversarial-learned loss for domain adaptation. In: Proc. of the 34th Conf. on Artificial Intelligence.
New York: AAAI Press, 2020. 3521–3528. [doi: 10.1609/aaai.v34i04.5757]
[29] Saito K, Ushiku Y, Harada T. Asymmetric tri-training for unsupervised domain adaptation. In: Proc. of the 34th Int’l Conf. on Machine
Learning. Sydney: JMLR.org, 2017. 2988–2997.
[30] Xie SA, Zheng ZB, Chen L, Chen C. Learning semantic representations for unsupervised domain adaptation. In: Proc. of the 35th Int’l
Conf. on Machine Learning. Stockholm: PMLR, 2018. 5419–5428.
[31] Chen CQ, Xie WP, Huang WB, Rong Y, Ding XH, Huang Y, Xu TY, Huang JZ. Progressive feature alignment for unsupervised domain
adaptation. In: Proc. of the 2019 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 627–636. [doi:
10.1109/CVPR.2019.00072]
[32] Pan YW, Yao T, Li YH, Wang Y, Ngo CW, Mei T. Transferrable prototypical networks for unsupervised domain adaptation. In: Proc. of
the 2019 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 2234–2242. [doi: 10.1109/CVPR.
2019.00234]
[33] Zou Y, Yu ZD, Kumar BVK, Wang JS. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In:
Proc. of the 15th European Conf. on Computer Vision. Munich: Springer, 2018. 297–313. [doi: 10.1007/978-3-030-01219-9_18]
[34] Wang Q, Breckon T. Unsupervised domain adaptation via structured prediction based selective pseudo-labeling. In: Proc. of the 34th
Conf. on Artificial Intelligence. New York: AAAI Press, 2020. 6243–6250. [doi: 10.1609/aaai.v34i04.6091]
[35] Patel VM, Gopalan R, Li RN, Chellappa R. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine,
2015, 32(3): 53–69. [doi: 10.1109/MSP.2014.2347059]
[36] Rahman MM, Fookes C, Baktashmotlagh M, Sridharan S. On minimum discrepancy estimation for deep domain adaptation. In: Singh R,
Vatsa M, Patel V, Ratha N, eds. Domain Adaptation for Visual Understanding. Cham: Springer, 2020. 81–94. [doi: 10.1007/978-3-030-
30671-7_6]
[37] Morerio P, Cavazza J, Murino V. Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In: Proc. of the 6th Int’l
Conf. on Learning Representations. Vancouver: OpenReview.net, 2018.
[38] Zhuang FZ, Cheng XH, Luo P, Pan SJ, He Q. Supervised representation learning: Transfer learning with deep autoencoders. In: Proc. of
the 24th Int’l Joint Conf. on Artificial Intelligence. Buenos: AAAI Press, 2015. 4119–4125.
[39] Zhang Y, Wang NB, Cai SB, Song L. Unsupervised domain adaptation by mapped correlation alignment. IEEE Access, 2018, 6:
44698–44706. [doi: 10.1109/access.2018.2865249]

