[38] Angel A, Koudas N, Sarkas N, et al. Dense subgraph maintenance under streaming edge weight updates for real-time story
identification. The VLDB Journal, 2014, 23: 175−199.
[39] Yu W, Li J, Bhuiyan MZA, et al. Ring: Real-time emerging anomaly monitoring system over text streams. IEEE Trans. on Big
Data, 2017, 5(4): 506−519.
[40] Allan J. Topic Detection and Tracking: Event-based Information Organization. Springer Publishing Company, Incorporated, 2002.
[41] Peng H, Zhang R, Li S, et al. Reinforced, incremental and cross-lingual event detection from social messages. IEEE Trans. on
Pattern Analysis and Machine Intelligence, 2022, 45(1): 980−998.
[42] Sun M, Zhao S, Gilvary C, et al. Graph convolutional networks for computational drug development and discovery. Briefings in
Bioinformatics, 2020, 21(3): 919−935.
[43] Rong Y, Bian Y, Xu T, et al. Self-supervised graph transformer on large-scale molecular data. In: Advances in Neural Information
Processing Systems, Vol.33. 2020. 12559−12571.
[44] Xiao ST, Shao YX, Li YW, et al. LECF: Recommendation via learnable edge collaborative filtering. Science China Information
Sciences, 2022, 65(1): 112101.
[45] Li Y, Yuan Y, Wang Y, et al. Distributed multimodal path queries. IEEE Trans. on Knowledge and Data Engineering, 2020, 34(7):
3196−3210.
[46] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks. arXiv:1710.10903, 2017.
[47] Li H, Shao Y, Du J, et al. An I/O-efficient disk-based graph system for scalable second-order random walk of large graphs.
arXiv:2203.16123, 2022.
[48] Dai H, Li H, Tian T, et al. Adversarial attack on graph structured data. In: Proc. of the Int’l Conf. on Machine Learning. 2018.
1115−1124.
[49] Chen L, Li JT, Peng QB, et al. Understanding structural vulnerability in graph convolutional networks. In: Proc. of the IJCAI. 2021.
2249−2255.
[50] Zhu D, Zhang Z, Cui P, et al. Robust graph convolutional networks against adversarial attacks. In: Proc. of the 25th ACM SIGKDD
Int’l Conf. on Knowledge Discovery & Data Mining. 2019. 1399−1407.
[51] Li Q, Wen Z, Wu Z, et al. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE
Trans. on Knowledge and Data Engineering, 2021, 35(4): 3347−3366.
[52] Arivazhagan MG, Aggarwal V, Singh AK, et al. Federated learning with personalization layers. arXiv:1912.00818, 2019.
[53] Wang B, Li A, Pang M, et al. GraphFL: A federated learning framework for semi-supervised node classification on graphs. In:
Proc. of the IEEE Int’l Conf. on Data Mining (ICDM). 2022. 498−507.
[54] Scardapane S, Spinelli I, Di Lorenzo P. Distributed training of graph convolutional networks. IEEE Trans. on Signal and
Information Processing over Networks, 2020, 7: 87−100.
[55] Wan C, Li Y, Li A, et al. BNS-GCN: Efficient full-graph training of graph convolutional networks with partition-parallelism and
random boundary node sampling. Proc. of Machine Learning and Systems, 2022, 4: 673−693.
[56] Yao Y, Jin W, Ravi S, et al. FedGCN: Convergence-communication tradeoffs in federated training of graph convolutional
networks. arXiv:2201.12433, 2023.
[57] Zhang K, Yang C, Li X, et al. Subgraph federated learning with missing neighbor generation. In: Advances in Neural Information
Processing Systems, Vol.34. 2021. 6671−6682.
[58] Watkins CJCH, Dayan P. Q-learning. Machine Learning, 1992, 8: 279−292.
[59] Mnih V, Kavukcuoglu K, Silver D, et al. Playing Atari with deep reinforcement learning. arXiv:1312.5602, 2013.
[60] Sutton RS, McAllester D, Singh S, et al. Policy gradient methods for reinforcement learning with function approximation. In:
Advances in Neural Information Processing Systems, Vol.12. 1999. 1057−1063.
[61] Silver D, Lever G, Heess N, et al. Deterministic policy gradient algorithms. In: Proc. of the 31st Int’l Conf. on Machine Learning.
2014. 387−395.
[62] Lillicrap TP, Hunt JJ, Pritzel A, et al. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2019.
[63] Banner R, Hubara I, Hoffer E, et al. Scalable methods for 8-bit training of neural networks. In: Advances in Neural Information
Processing Systems, Vol.31. 2018.
[64] Li Y, Li W, Xue Z. Federated learning with stochastic quantization. Int’l Journal of Intelligent Systems, 2022, 37(12):
11600−11621.