Page 200 - 《软件学报》 (Journal of Software), 2026, Issue 1
Liu ZY, et al.: A survey of graph contrastive learning methods. 197
IEEE Trans. on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3311–3328. [doi: 10.1109/TPAMI.2022.3186752]
[31] Zheng ZQ, Bin Y, Lü XO, Wu Y, Yang Y, Shen HT. Asynchronous generative adversarial network for asymmetric unpaired image-to-image translation. IEEE Trans. on Multimedia, 2023, 25: 2474–2487. [doi: 10.1109/TMM.2022.3147425]
[32] Rao DY, Xu TY, Wu XJ. TGFuse: An infrared and visible image fusion approach based on Transformer and generative adversarial network. arXiv:2201.10147, 2022.
[33] Hong FT, Zhang LH, Shen L, Xu D. Depth-aware generative adversarial network for talking head video generation. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022. 3387–3396. [doi: 10.1109/CVPR52688.2022.00339]
[34] Jia WQ, Liu M, Rehg JM. Generative adversarial network for future hand segmentation from egocentric video. In: Proc. of the 17th European Conf. on Computer Vision. Tel Aviv: Springer, 2022. 639–656. [doi: 10.1007/978-3-031-19778-9_37]
[35] Zhao YZ, Po LM, Yu WY, Rehman YAU, Liu MY, Zhang YJ, Ou WF. VCGAN: Video colorization with hybrid generative adversarial network. IEEE Trans. on Multimedia, 2023, 25: 3017–3032. [doi: 10.1109/TMM.2022.3154600]
[36] Karuna EN, Sokolov PV, Gavrilic DA. Generative adversarial approach in natural language processing. In: Proc. of the XXV Int’l Conf. on Soft Computing and Measurements (SCM). Saint Petersburg: IEEE, 2022. 111–114. [doi: 10.1109/SCM55405.2022.9794898]
[37] Lai CT, Hong YT, Chen HY, Lu CJ, Lin SD. Multiple text style transfer by using word-level conditional generative adversarial network with two-phase training. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 3579–3584. [doi: 10.18653/v1/D19-1366]
[38] Guarino G, Samet A, Nafi A, Cavallucci D. PaGAN: Generative adversarial network for patent understanding. In: Proc. of the 2021 IEEE Int’l Conf. on Data Mining. Auckland: IEEE, 2021. 1084–1089. [doi: 10.1109/ICDM51629.2021.00126]
[39] Veličković P, Fedus W, Hamilton WL, Liò P, Bengio Y, Hjelm RD. Deep graph infomax. arXiv:1809.10341, 2018.
[40] You YN, Chen TL, Sui YD, Chen T, Wang ZY, Shen Y. Graph contrastive learning with augmentations. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 5812–5823.
[41] Zhu YQ, Xu YC, Yu F, Liu Q, Wu S, Wang L. Deep graph contrastive representation learning. arXiv:2006.04131, 2020.
[42] Sun FY, Hoffmann J, Verma V, Tang J. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv:1908.01000, 2020.
[43] Suresh S, Li P, Hao C, Neville J. Adversarial graph augmentation to improve graph contrastive learning. In: Proc. of the 35th Int’l Conf. on Neural Information Processing Systems. Curran Associates Inc., 2021. 15920–15933.
[44] You YN, Chen TL, Wang ZY, Shen Y. Bringing your own view: Graph contrastive learning without prefabricated data augmentations. In: Proc. of the 15th ACM Int’l Conf. on Web Search and Data Mining. ACM, 2022. 1300–1309. [doi: 10.1145/3488560.3498416]
[45] Xia J, Wu LR, Chen JT, Hu BZ, Li SZ. SimGRACE: A simple framework for graph contrastive learning without data augmentation. In: Proc. of the 2022 ACM Web Conf. ACM, 2022. 1070–1079. [doi: 10.1145/3485447.3512156]
[46] Yu JL, Yin HZ, Xia X, Chen T, Cui LZ, Nguyen QVH. Are graph augmentations necessary? Simple graph contrastive learning for recommendation. In: Proc. of the 45th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Madrid: ACM, 2022. 1294–1303. [doi: 10.1145/3477495.3531937]
[47] Lee N, Lee J, Park C. Augmentation-free self-supervised learning on graphs. In: Proc. of the 2022 AAAI Conf. on Artificial Intelligence. AAAI, 2022. 7372–7380. [doi: 10.1609/aaai.v36i7.20700]
[48] Li SH, Wang X, Zhang A, Wu YX, He XN, Chua TS. Let invariant rationale discovery inspire graph contrastive learning. In: Proc. of the 39th Int’l Conf. on Machine Learning. 2022. 13052–13065.
[49] Jiang YQ, Huang C, Huang LH. Adaptive graph contrastive learning for recommendation. In: Proc. of the 29th ACM SIGKDD Conf. on Knowledge Discovery and Data Mining. Long Beach: ACM, 2023. 4252–4261. [doi: 10.1145/3580305.3599768]
[50] Yang YH, Huang C, Xia LH, Li CL. Knowledge graph contrastive learning for recommendation. In: Proc. of the 45th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Madrid: ACM, 2022. 1434–1443. [doi: 10.1145/3477495.3532009]
[51] Yuan Y, Lin L. Self-supervised pretraining of Transformers for satellite image time series classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 474–487. [doi: 10.1109/JSTARS.2020.3036602]
[52] Zhang X, Zhao ZY, Tsiligkaridis T, Zitnik M. Self-supervised contrastive pre-training for time series via time-frequency consistency. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2022. 3988–4003.
[53] Deldari S, Smith DV, Xue H, Salim FD. Time series change point detection with self-supervised contrastive predictive coding. In: Proc. of the 2021 Web Conf. Ljubljana: ACM, 2021. 3124–3135. [doi: 10.1145/3442381.3449903]
[54] Tipirneni S, Reddy CK. Self-supervised Transformer for sparse and irregularly sampled multivariate clinical time-series. ACM Trans. on

