软件学报 (Journal of Software), 2026, Vol. 37, No. 1, p. 196
Laplace matrix estimation. IEEE Trans. on Intelligent Transportation Systems, 2022, 23(2): 1009–1018. [doi: 10.1109/TITS.2020.3019497]
[9] Ji SX, Pan SR, Cambria E, Marttinen P, Yu PS. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. on Neural Networks and Learning Systems, 2022, 33(2): 494–514. [doi: 10.1109/TNNLS.2021.3070843]
[10] Yasunaga M, Bosselut A, Ren HY, Zhang XK, Manning CD, Liang P, Leskovec J. Deep bidirectional language-knowledge graph
pretraining. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2022.
37309–37323.
[11] Chandak P, Huang KX, Zitnik M. Building a knowledge graph to enable precision medicine. Scientific Data, 2023, 10(1): 67. [doi: 10.1038/s41597-023-01960-3]
[12] Santos A, Colaço AR, Nielsen AB, Niu LL, Strauss M, Geyer PE, Coscia F, Albrechtsen NJW, Mundt F, Jensen LJ, Mann M. A
knowledge graph to interpret clinical proteomics data. Nature Biotechnology, 2022, 40(5): 692–702. [doi: 10.1038/s41587-021-01145-6]
[13] Li MM, Huang KX, Zitnik M. Graph representation learning in biomedicine and healthcare. Nature Biomedical Engineering, 2022, 6(12):
1353–1369. [doi: 10.1038/s41551-022-00942-x]
[14] Ding KZ, Xu Z, Tong HH, Liu H. Data augmentation for deep graph learning: A survey. ACM SIGKDD Explorations Newsletter, 2022,
24(2): 61–77. [doi: 10.1145/3575637.3575646]
[15] Wu LF, Cui P, Pei J, Zhao L, Guo XJ. Graph neural networks: Foundation, frontiers and applications. In: Proc. of the 28th ACM
SIGKDD Conf. on Knowledge Discovery and Data Mining. Washington: ACM, 2022. 4840–4841. [doi: 10.1145/3534678.3542609]
[16] Ahmed M, Seraj R, Islam SMS. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 2020, 9(8):
1295. [doi: 10.3390/electronics9081295]
[17] Sinaga KP, Yang MS. Unsupervised k-means clustering algorithm. IEEE Access, 2020, 8: 80716–80727. [doi: 10.1109/ACCESS.2020.2988796]
[18] Samek W, Montavon G, Lapuschkin S, Anders CJ, Müller KR. Explaining deep neural networks and beyond: A review of methods and
applications. Proc. of the IEEE, 2021, 109(3): 247–278. [doi: 10.1109/JPROC.2021.3060483]
[19] Liou CY, Cheng WC, Liou JW, Liou DR. Autoencoder for words. Neurocomputing, 2014, 139: 84–96. [doi: 10.1016/j.neucom.2013.09.055]
[20] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc.
of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1
(Long and Short Papers). Minneapolis: Association for Computational Linguistics, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
[21] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. 1877–1901.
[22] Goldblum M, Souri H, Ni RK, Shu ML, Prabhu V, Somepalli G, Chattopadhyay P, Ibrahim M, Bardes A, Hoffman J, Chellappa R,
Wilson AG, Goldstein T. Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks. In: Proc.
of the 37th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 29343–29371.
[23] Bai YT, Geng XY, Mangalam K, Bar A, Yuille AL, Darrell T, Malik J, Efros AA. Sequential modeling enables scalable learning for large vision models. In: Proc. of the 2024 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2024. 22861–22872. [doi: 10.1109/CVPR52733.2024.02157]
[24] Cao YX, Xu JR, Yang C, Wang JA, Zhang YC, Wang CP, Chen L, Yang Y. When to pre-train graph neural networks? From data
generation perspective! In: Proc. of the 29th ACM SIGKDD Conf. on Knowledge Discovery and Data Mining. Long Beach: ACM, 2023.
142–153. [doi: 10.1145/3580305.3599548]
[25] Yin J, Li CZ, Yan H, Lian JX, Wang SZ. Train once and explain everywhere: Pre-training interpretable graph neural networks. In: Proc.
of the 37th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 35277–35299.
[26] Islam A, Chen CF, Panda R, Karlinsky L, Feris R, Radke RJ. Dynamic distillation network for cross-domain few-shot recognition with
unlabeled data. In: Proc. of the 35th Int’l Conf. on Neural Information Processing Systems. Curran Associates Inc., 2021. 3584–3595.
[27] Arjannikov T, Tzanetakis G. Cold-start hospital length of stay prediction using positive-unlabeled learning. In: Proc. of the 2021 IEEE
EMBS Int’l Conf. on Biomedical and Health Informatics (BHI). Athens: IEEE, 2021. 1–4. [doi: 10.1109/BHI50953.2021.9508596]
[28] Liu Y, Zhu L, Pei SD, Fu HZ, Qin J, Zhang Q, Wan L, Feng W. From synthetic to real: Image dehazing collaborating with unlabeled real
data. In: Proc. of the 29th ACM Int’l Conf. on Multimedia. ACM, 2021. 50–58. [doi: 10.1145/3474085.3475331]
[29] Ahmad W, Ali H, Shah Z, Azmat S. A new generative adversarial network for medical images super resolution. Scientific Reports, 2022,
12(1): 9533. [doi: 10.1038/s41598-022-13658-4]
[30] Zhu JC, Gao LL, Song JK, Li YF, Zheng F, Li XL, Shen HT. Label-guided generative adversarial network for realistic image synthesis.