[30] Zhou XH, Shen J. Survey on federated learning for medical application scenarios. Information Technology and Informatization, 2023, (11): 135–141 (in Chinese). [doi: 10.3969/j.issn.1672-9528.2023.11.031]
[31] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv:1910.01108, 2019.
[32] Jiao XQ, Yin YC, Shang LF, Jiang X, Chen X, Li LL, Wang F, Liu Q. TinyBERT: Distilling BERT for natural language understanding. In: Proc. of the 2020 Findings of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. 4163–4174. [doi: 10.18653/v1/2020.findings-emnlp.372]
[33] Li ZY, Si SJ, Wang JZ, Xiao J. Federated split BERT for heterogeneous text classification. In: Proc. of the 2022 Int’l Joint Conf. on Neural Networks (IJCNN). Padua: IEEE, 2022. 1–8. [doi: 10.1109/IJCNN55064.2022.9892845]
[34] Tian YYS, Wan Y, Lyu LJ, Yao DZ, Jin H, Sun LC. FedBERT: When federated learning meets pre-training. ACM Trans. on Intelligent Systems and Technology, 2022, 13(4): 66. [doi: 10.1145/3510033]
[35] Hao YR, Dong L, Wei FR, Xu K. Visualizing and understanding the effectiveness of BERT. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics, 2019. 4143–4152. [doi: 10.18653/v1/D19-1424]
[36] Jawahar G, Sagot B, Seddah D. What does BERT learn about the structure of language? In: Proc. of the 57th Annual Meeting of the Association for Computational Linguistics. Florence: Association for Computational Linguistics, 2019. 3651–3657. [doi: 10.18653/v1/P19-1356]
[37] Manginas N, Chalkidis I, Malakasiotis P. Layer-wise guided training for BERT: Learning incrementally refined document representations. In: Proc. of the 4th Workshop on Structured Prediction for NLP. Association for Computational Linguistics, 2020. 53–61. [doi: 10.18653/v1/2020.spnlp-1.7]
[38] Wang J, Chen K, Chen G, Shou LD, McAuley J. SkipBERT: Efficient inference with shallow layer skipping. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers). Dublin: Association for Computational Linguistics, 2022. 7287–7301. [doi: 10.18653/v1/2022.acl-long.503]
[39] Fayek HM, Cavedon L, Wu HR. Progressive learning: A deep learning framework for continual learning. Neural Networks, 2020, 128: 345–357. [doi: 10.1016/j.neunet.2020.05.011]
[40] Lo K, Wang LL, Neumann M, Kinney R, Weld D. S2ORC: The semantic scholar open research corpus. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. 4969–4983. [doi: 10.18653/v1/2020.acl-main.447]
[41] Liang Z, Wang HZ, Dai JJ, Shao XY, Ding XO, Mu TY. Interpretability of entity matching based on pre-trained language model. Ruan Jian Xue Bao/Journal of Software, 2023, 34(3): 1087–1108 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6794.htm [doi: 10.13328/j.cnki.jos.006794]
[42] Sheng XC, Chen DW. Research on text classification model based on federated learning and differential privacy. Journal of Information Security Research, 2023, 9(12): 1145–1151 (in Chinese with English abstract). [doi: 10.12379/j.issn.2096-1057.2023.12.02]
                 [43]   Li BH, Xiang YX, Feng D, He ZC, Wu JJ, Dai TL, Li J. Short text classification model combining knowledge aware and dual attention.
                     Journal of Software, 2022, 33(10): 3565–3581 (in Chinese with English abstract). [doi: 10.13328/j.cnki.jos.006630]
[44] Zhao JS, Song MX, Gao X, Zhu QM. Research on text representation in natural language processing. Ruan Jian Xue Bao/Journal of Software, 2022, 33(1): 102–128 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6304.htm [doi: 10.13328/j.cnki.jos.006304]
[45] Collier N, Ohta T, Tsuruoka Y, Tateisi Y, Kim JD. Introduction to the bio-entity recognition task at JNLPBA. In: Proc. of the 2004 Int’l Joint Workshop on Natural Language Processing in Biomedicine and Its Applications (NLPBA/BioNLP). Geneva: COLING, 2004. 73–78.
[46] Luan Y, He LH, Ostendorf M, Hajishirzi H. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In: Proc. of the 2018 Conf. on Empirical Methods in Natural Language Processing. Brussels: Association for Computational Linguistics, 2018. 3219–3232. [doi: 10.18653/v1/D18-1360]
[47] Li J, Sun YP, Johnson RJ, Sciaky D, Wei CH, Leaman R, Davis AP, Mattingly CJ, Wiegers TC, Lu ZY. BioCreative V CDR task corpus: A resource for chemical disease relation extraction. Database, 2016, 2016: baw068. [doi: 10.1093/database/baw068]
[48] Doğan RI, Leaman R, Lu ZY. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 2014, 47: 1–10. [doi: 10.1016/j.jbi.2013.12.006]
[49] Kringelum J, Kjaerulff SK, Brunak S, Lund O, Oprea TI, Taboureau O. ChemProt-3.0: A global chemical biology diseases mapping. Database, 2016, 2016: bav123. [doi: 10.1093/database/bav123]