
1602                                                       Journal of Software (软件学报), 2025, Vol. 36, No. 4


 [7] Yang BS, Yih WT, He XD, Gao JF, Deng L. Embedding entities and relations for learning and inference in knowledge bases. arXiv:1412.6575, 2015.
 [8] Trouillon T, Welbl J, Riedel S, Gaussier É, Bouchard G. Complex embeddings for simple link prediction. In: Proc. of the 33rd Int’l Conf. on Machine Learning. New York: JMLR.org, 2016. 2071–2080.
 [9] Yao L, Mao CS, Luo Y. KG-BERT: BERT for knowledge graph completion. arXiv:1909.03193, 2019.
[10] Wang M, Wang S, Yang H, Zhang Z, Chen X, Qi GL. Is visual context really helpful for knowledge graph? A representation learning perspective. In: Proc. of the 29th ACM Int’l Conf. on Multimedia. ACM, 2021. 2735–2743. [doi: 10.1145/3474085.3475470]
[11] Zhang NY, Xie X, Chen X, Deng SM, Ye HB, Chen HJ. Knowledge collaborative fine-tuning for low-resource knowledge graph completion. Ruan Jian Xue Bao/Journal of Software, 2022, 33(10): 3531–3545 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6628.htm [doi: 10.13328/j.cnki.jos.006628]
[12] Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907, 2017.
[13] Wang Z, Zhang JW, Feng JL, Chen Z. Knowledge graph embedding by translating on hyperplanes. In: Proc. of the 28th AAAI Conf. on Artificial Intelligence. Québec City: AAAI, 2014. 1112–1119. [doi: 10.1609/aaai.v28i1.8870]
[14] Dettmers T, Minervini P, Stenetorp P, Riedel S. Convolutional 2D knowledge graph embeddings. In: Proc. of the 32nd AAAI Conf. on Artificial Intelligence. New Orleans: AAAI, 2018. 1811–1818. [doi: 10.1609/aaai.v32i1.11573]
[15] Nguyen DQ, Nguyen TD, Nguyen DQ, Phung D. A novel embedding model for knowledge base completion based on convolutional neural network. In: Proc. of the 2018 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 2 (Short Papers). New Orleans: Association for Computational Linguistics, 2018. 327–333. [doi: 10.18653/v1/N18-2053]
[16] Chen X, Zhang NY, Li L, Deng SM, Tan CQ, Xu CL, Huang F, Si L, Chen HJ. Hybrid Transformer with multi-level fusion for multimodal knowledge graph completion. In: Proc. of the 45th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Madrid: ACM, 2022. 904–915. [doi: 10.1145/3477495.3531992]
[17] Zhang YC, Chen MY, Zhang W. Modality-aware negative sampling for multi-modal knowledge graph embedding. In: Proc. of the 2023 Int’l Joint Conf. on Neural Networks (IJCNN). Gold Coast: IEEE, 2023. 1–8. [doi: 10.1109/IJCNN54540.2023.10191314]
[18] Liang WX, Jiang YH, Liu ZX. GraghVQA: Language-guided graph neural networks for graph-based visual question answering. arXiv:2104.10283, 2021.
[19] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 154–164. [doi: 10.18653/v1/D19-1015]
[20] Ying CX, Cai TL, Luo SJ, Zheng SX, Ke GL, He D, Shen YM, Liu TY. Do Transformers really perform bad for graph representation? arXiv:2106.05234, 2021.
[21] Schlichtkrull M, Kipf TN, Bloem P, van den Berg R, Titov I, Welling M. Modeling relational data with graph convolutional networks. In: Proc. of the 15th Int’l Conf. on the Semantic Web. Heraklion: Springer, 2018. 593–607. [doi: 10.1007/978-3-319-93417-4_38]
[22] Vashishth S, Sanyal S, Nitin V, Talukdar P. Composition-based multi-relational graph convolutional networks. arXiv:1911.03082, 2020.
[23] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. In: Proc. of the 31st Int’l Conf. on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 6000–6010.
[24] Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G, Sutskever I. Learning transferable visual models from natural language supervision. In: Proc. of the 38th Int’l Conf. on Machine Learning. 2021. 8748–8763.
[25] Zhang D, Wei SZ, Li SS, Wu HQ, Zhu QM, Zhou GD. Multi-modal graph fusion for named entity recognition with targeted visual guidance. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. AAAI, 2021. 14347–14355. [doi: 10.1609/aaai.v35i16.17687]
[26] Zheng CM, Feng JH, Fu Z, Cai Y, Li Q, Wang T. Multimodal relation extraction with efficient graph alignment. In: Proc. of the 29th ACM Int’l Conf. on Multimedia. ACM, 2021. 5298–5306. [doi: 10.1145/3474085.3476968]
[27] Sun H, Wang HY, Liu JQ, Chen YW, Lin LF. CubeMLP: An MLP-based model for multimodal sentiment analysis and depression estimation. In: Proc. of the 30th ACM Int’l Conf. on Multimedia. Lisboa: ACM, 2022. 3722–3729. [doi: 10.1145/3503161.3548025]
[28] Gers FA, Schmidhuber J, Cummins F. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000, 12(10): 2451–2471. [doi: 10.1162/089976600300015015]
[29] Chung J, Gulcehre C, Cho K, Bengio Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555, 2014.
[30] Zhang CX, Song DJ, Huang C, Swami A, Chawla NV. Heterogeneous graph neural network. In: Proc. of the 25th ACM SIGKDD Int’l