Page 87 - Journal of Software (《软件学报》), 2024, No. 4

Wang F, et al.: Source-free open-set domain adaptation via locally consistent active learning                                                     1665


         [21]    Yuan Y, Chung SW, Kang HG. Gradient-based active learning query strategy for end-to-end speech recognition. In: Proc. of the
             IEEE Int’l Conf. on Acoustics, Speech and Signal Processing (ICASSP). Brighton: IEEE, 2019. 2832−2836.
         [22]    Yang JF, Peng XY, Wang K, et al. Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors.
             arXiv:2205.14467, 2022.
         [23]    Tian Q, Ma C, Zhang FY, et al. Source-free unsupervised domain adaptation with sample transport learning. Journal of Computer
             Science and Technology, 2021, 36(3): 606−616.
         [24]    Wang D, Shang Y. A new active labeling method for deep learning. In: Proc. of the Int’l Joint Conf. on Neural Networks (IJCNN).
             Beijing: IEEE, 2014. 112−119.
         [25]    He T, Jin XM, Ding GG, et al. Towards better uncertainty sampling: Active learning with multiple views for deep convolutional
             neural network. In: Proc. of the IEEE Int’l Conf. on Multimedia and Expo (ICME). Shanghai: IEEE, 2019. 1360−1365.
         [26]    Sener O, Savarese S. Active learning for convolutional neural networks: A core-set approach. In: Proc. of the Int’l Conf. on
             Learning Representations (ICLR). Vancouver: OpenReview.net, 2018.
         [27]    Long MS, Cao ZJ, Wang JM. Learning transferable features with deep adaptation networks. In: Proc. of the Int’l Conf. on Machine
             Learning (ICML). Lille: JMLR.org, 2015. 97−105.
         [28]    Long MS, Zhu H, Wang JM, et al. Deep transfer learning with joint adaptation networks. In: Proc. of the Int’l Conf. on Machine
             Learning (ICML). Sydney: PMLR, 2017. 2208−2217.
         [29]    Long MS, Cao ZJ, Wang JM, et al. Conditional adversarial domain adaptation. In: Advances in Neural Information Processing
             Systems, Vol.31. Montreal, 2018.
         [30]    Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation. In: Proc. of the Int’l Conf. on Machine Learning
             (ICML). Lille: JMLR.org, 2015. 1180−1189.
         [31]    Panareda Busto P, Gall J. Open set domain adaptation. In: Proc. of the IEEE Int’l Conf. on Computer Vision (ICCV). Venice: IEEE
             Computer Society, 2017. 754−763.
         [32]    Saito K, Yamamoto S, Ushiku Y, et al. Open set domain adaptation by backpropagation. In: Proc. of the European Conf. on
             Computer Vision (ECCV). Munich: Springer, 2018. 153−168.
         [33]    Busto PP, Iqbal A, Gall J. Open set domain adaptation for image and action recognition. IEEE Trans. on Pattern Analysis and
             Machine Intelligence, 2018, 42(2): 413−429.
         [34]    Jiang P, Wu AM, Han YH, et al. Bidirectional adversarial training for semi-supervised domain adaptation. In: Proc. of the 29th
             Int’l Joint Conf. on Artificial Intelligence (IJCAI). ijcai.org, 2020. 934−940.
         [35]    Kim T, Kim C. Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation. In: Proc.
             of the European Conf. on Computer Vision (ECCV). Glasgow: Springer, 2020. 591−607.
         [36]    Fu B, Cao ZJ, Wang JM, et al. Transferable query selection for active domain adaptation. In: Proc. of the IEEE/CVF Conf. on
             Computer Vision and Pattern Recognition (CVPR). Virtual: IEEE Computer Society, 2021. 7272−7281.
         [37]    Xie BH, Yuan LH, Li SL, et al. Active learning for domain adaptation: An energy-based approach. In: Proc. of the AAAI Conf. on
             Artificial Intelligence. Virtual: AAAI, 2022. 8708−8716.
         [38]    Fu B, Cao ZJ, Long MS. Learning to detect open classes for universal domain adaptation. In: Proc. of the European Conf. on
             Computer Vision (ECCV). Glasgow: Springer, 2020. 567−583.
         [39]    Prabhu V, Chandrasekaran A, Saenko K, et al. Active domain adaptation via clustering uncertainty-weighted embeddings. In: Proc.
             of the IEEE/CVF Int’l Conf. on Computer Vision (ICCV). Montreal: IEEE, 2021. 8505−8514.
         [40]    Ding YH, Sheng LJ, Liang J, et al. ProxyMix: Proxy-based mixup training with label refinery for source-free domain adaptation.
             Neural Networks, 2023.
         [41]    Yang SQ, van de Weijer J, Herranz L, et al. Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In:
             Advances in Neural Information Processing Systems. Virtual, 2021. 29393−29405.
         [42]    Yang SQ, Wang YX, van de Weijer J, et al. Generalized source-free domain adaptation. In: Proc. of the IEEE/CVF Int’l Conf. on
             Computer Vision (ICCV). Montreal: IEEE, 2021. 8978−8987.
         [43]    Liang J, Hu DP, Feng JS. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain
             adaptation. In: Proc. of the Int’l Conf. on Machine Learning (ICML). Virtual: PMLR, 2020. 6028−6039.
         [44]    Saenko K, Kulis B, Fritz M, et al. Adapting visual category models to new domains. In: Proc. of the European Conf. on Computer
             Vision (ECCV). Crete: Springer, 2010. 213−226.
         [45]    Venkateswara H, Eusebio J, Chakraborty S, et al. Deep hashing network for unsupervised domain adaptation. In: Proc. of the
             IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE Computer Society, 2017. 5018−5027.