                     [doi: 10.1186/s12859-017-1999-8]
                 [46]   Chen J, Althagafi A, Hoehndorf R. Predicting candidate genes from phenotypes, functions and anatomical site of expression. Bioinformatics, 2021, 37(6): 853–860. [doi: 10.1093/bioinformatics/btaa879]
                 [47]   Shiraishi Y, Kaneiwa K. A self-matching training method with annotation embedding models for ontology subsumption prediction. arXiv:2402.16278, 2024.
                 [48]   He Y, Chen JY, Jimenez-Ruiz E, Dong H, Horrocks I. Language model analysis for ontology subsumption inference. In: Findings of the Association for Computational Linguistics: ACL 2023. Toronto: Association for Computational Linguistics, 2023. 3439–3453. [doi: 10.18653/v1/2023.findings-acl.213]
                 [49]   Shi JC, Chen JY, Dong H, Khan I, Liang L, Zhou QZ, Wu Z, Horrocks I. Subsumption prediction for E-commerce taxonomies. In: Proc.
                     of the 20th Int’l Conf. on the Semantic Web. Hersonissos: Springer, 2023. 244–261. [doi: 10.1007/978-3-031-33455-9_15]
                 [50]   Hao JH, Chen MH, Yu WC, Sun YZ, Wang W. Universal representation learning of knowledge bases by jointly embedding instances and
                     ontological concepts. In: Proc. of the 25th ACM SIGKDD Int’l Conf. on Knowledge Discovery & Data Mining. Anchorage: ACM, 2019.
                     1709–1719. [doi: 10.1145/3292500.3330838]
                 [51]   Iyer RG, Bai YS, Wang W, Sun YZ. Dual-geometric space embedding model for two-view knowledge graphs. In: Proc. of the 28th ACM
                     SIGKDD Conf. on Knowledge Discovery and Data Mining. Washington: ACM, 2022. 676–686. [doi: 10.1145/3534678.3539350]
                 [52]   Huang ZJ, Wang DH, Huang BX, Zhang CW, Shang JB, Liang Y, Wang ZY, Li X, Faloutsos C, Sun YZ, Wang W. Concept2Box: Joint geometric embeddings for learning two-view knowledge graphs. In: Findings of the Association for Computational Linguistics: ACL 2023. Toronto: Association for Computational Linguistics, 2023. 10105–10118. [doi: 10.18653/v1/2023.findings-acl.642]
                 [53]   Bing R, Yuan G, Meng FR, Wang SZ, Qiao SJ, Wang ZX. Multi-view contrastive enhanced heterogeneous graph structure learning. Ruan Jian Xue Bao/Journal of Software, 2023, 34(10): 4477–4500 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6883.htm [doi: 10.13328/j.cnki.jos.006883]
                 [54]   Kulmanov M, Wang LW, Yuan Y, Hoehndorf R. EL embeddings: Geometric construction of models for the description logic EL++. In: Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence. Macao: IJCAI, 2019. 6103–6109. [doi: 10.24963/ijcai.2019/845]
                 [55]   Garg D, Ikbal S, Srivastava SK, Vishwakarma H, Karanam H, Subramaniam LV. Quantum embedding of knowledge for reasoning. In: Proc. of the 33rd Int’l Conf. on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2019. 502. [doi: 10.5555/3454287.3454789]
                 [56]   Smaili FZ, Gao X, Hoehndorf R. Onto2Vec: Joint vector-based representation of biological entities and their ontology-based annotations. Bioinformatics, 2018, 34(13): i52–i60. [doi: 10.1093/bioinformatics/bty259]
                 [57]   Smaili FZ, Gao X, Hoehndorf R. OPA2Vec: Combining formal and informal content of biomedical ontologies to improve similarity-
                     based prediction. Bioinformatics, 2019, 35(12): 2133–2140. [doi: 10.1093/bioinformatics/bty933]
                 [58]   Soylu A, Kharlamov E, Zheleznyakov D, Jimenez-Ruiz E, Giese M, Skjæveland MG, Hovland D, Schlatte R, Brandt S, Lie H, Horrocks
                     I. OptiqueVQS: A visual query system over ontologies for industry. Semantic Web, 2018, 9(5): 627–660. [doi: 10.3233/SW-180293]
                 [59]   Holter OM, Myklebust EB, Chen JY, Jimenez-Ruiz E. Embedding OWL ontologies with OWL2Vec. In: Proc. of the 18th Int’l Semantic
                     Web Conf. (ISWC 2019). Auckland, 2019. 33–36.
                 [60]   Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1 (Long and Short Papers). Minneapolis: Association for Computational Linguistics, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
                 [61]   Crawshaw M, Košecká J. SLAW: Scaled loss approximate weighting for efficient multi-task learning. arXiv:2109.08218, 2021.
                 [62]   Yang JX, Xiang FY, Li R, Zhang LY, Yang XX, Jiang SX, Zhang HY, Wang D, Liu XL. Intelligent bridge management via big data
                     knowledge engineering. Automation in Construction, 2022, 135: 104118. [doi: 10.1016/j.autcon.2021.104118]
                 [63]   Li R, Mo TJ, Yang JX, Jiang SX, Li T, Liu YM. Ontologies-based domain knowledge modeling and heterogeneous sensor data integration for bridge health monitoring systems. IEEE Trans. on Industrial Informatics, 2021, 17(1): 321–332. [doi: 10.1109/TII.2020.2967561]
                 [64]   Ristoski P, Paulheim H. RDF2Vec: RDF graph embeddings for data mining. In: Proc. of the 15th Int’l Semantic Web Conf. (ISWC 2016). Kobe: Springer, 2016. 498–514. [doi: 10.1007/978-3-319-46523-4_30]
                 [65]   Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, Kang J. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 2020, 36(4): 1234–1240. [doi: 10.1093/bioinformatics/btz682]
                 [66]   Liu YH, Ott M, Goyal N, Du JF, Joshi M, Chen DQ, Levy O, Lewis M, Zettlemoyer L, Stoyanov V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692, 2019.
                 [67]   Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023.