
5454                                                      软件学报 (Journal of Software)  2025, Vol. 36, No. 12


                 [16]   Zhang SY, Zhai J, Bu L, Chen MS, Wang LZ, Li XD. Automated generation of LTL specifications for smart home IoT using natural
                     language. In: Proc. of the 2020 Design, Automation & Test in Europe Conf. & Exhibition (DATE). Grenoble: IEEE, 2020. 622–625. [doi:
                     10.23919/DATE48585.2020.9116374]
                 [17]   Pandita R, Taneja K, Williams L, Tung T. ICON: Inferring temporal constraints from natural language API descriptions. In: Proc. of the
                     2016 IEEE Int’l Conf. on Software Maintenance and Evolution (ICSME). Raleigh: IEEE, 2016. 378–388. [doi: 10.1109/ICSME.2016.59]
                 [18]   He J, Bartocci E, Ničković D, Isakovic H, Grosu R. DeepSTL: From English requirements to signal temporal logic. In: Proc. of the 44th
                     Int’l Conf. on Software Engineering. Pittsburgh: ACM, 2022. 610–622. [doi: 10.1145/3510003.3510171]
                 [19]   Zhong H, Zhang L, Xie T, Mei H. Inferring resource specifications from natural language API documentation. In: Proc. of the 2009
                     IEEE/ACM Int’l Conf. on Automated Software Engineering. Auckland: IEEE, 2009. 307–318. [doi: 10.1109/ASE.2009.94]
                 [20]   Fuggitti F, Chakraborti T. NL2LTL—A Python package for converting natural language (NL) instructions to linear temporal logic (LTL)
                     formulas. In: Proc. of the 37th AAAI Conf. on Artificial Intelligence. Washington: AAAI Press, 2023. 16428–16430. [doi: 10.1609/AAAI.
                     V37I13.27068]
                 [21]   Wang XB, Liu DM, Zhao L, Xue YN. Runtime verification monitor construction for three-valued PPTL. In: Proc. of the 6th Int’l
                     Workshop on Structured Object-oriented Formal Language and Method. Tokyo: Springer, 2016. 144–159. [doi: 10.1007/978-3-319-
                     57708-1_9]
                 [22]   Wu Q. Research on temporal logic semantic analysis of natural language requirements [MS. Thesis]. Xi’an: Xidian University, 2024 (in
                     Chinese with English abstract).
                 [23]   Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc.
                     of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
                     Minneapolis: ACL, 2019. 4171–4186. [doi: 10.18653/V1/N19-1423]
                 [24]   Radford A, Narasimhan K, Salimans T, Sutskever I. Improving language understanding by generative pre-training. 2018. https://cdn.
                     openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
                 [25]   Yang ZL, Dai ZH, Yang YM, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized autoregressive pretraining for language
                     understanding. arXiv:1906.08237, 2019.
                 [26]   Blasi A, Gorla A, Ernst MD, Pezzè M. Call me maybe: Using NLP to automatically generate unit test cases respecting temporal
                     constraints. In: Proc. of the 37th IEEE/ACM Int’l Conf. on Automated Software Engineering. Rochester: ACM, 2022. 19. [doi: 10.1145/
                     3551349.3556961]
                 [27]   Pandita R, Xiao XS, Yang W, Enck W, Xie T. WHYPER: Towards automating risk assessment of mobile applications. In: Proc. of the
                     22nd USENIX Conf. on Security. Washington: USENIX Association, 2013. 527–542.
                 [28]   Chen YB, Liu Z, Xu HJ, Darrell T, Wang XL. Meta-baseline: Exploring simple meta-learning for few-shot learning. In: Proc. of the 2021
                     IEEE/CVF Int’l Conf. on Computer Vision. Montreal: IEEE, 2021. 9042–9051. [doi: 10.1109/ICCV48922.2021.00893]
                 [29]   Bushara AR, Kumar RSV, Kumar SS. Classification of benign and malignancy in lung cancer using capsule networks with dynamic
                     routing algorithm on computed tomography images. Journal of Artificial Intelligence and Technology, 2024, 4(1): 40–48. [doi: 10.37965/
                     jait.2023.0218]
                 [30]   Socher R, Chen DQ, Manning CD, Ng AY. Reasoning with neural tensor networks for knowledge base completion. In: Proc. of the 27th
                     Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: Curran Associates Inc., 2013. 926–934.
                 [31]   Ahn J, Chang K, Choi KM, Kim T, Park H. DTOC-P: Deep-learning-driven timing optimization using commercial EDA tool with
                     practicality enhancement. IEEE Trans. on Computer-aided Design of Integrated Circuits and Systems, 2024, 43(8): 2493–2506. [doi: 10.
                     1109/TCAD.2024.3370110]
                 [32]   Wei J, Zou K. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In: Proc. of the 2019 Conf.
                     on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP).
                     Hong Kong: ACL, 2019. 6382–6388. [doi: 10.18653/V1/D19-1670]
                 [33]   He LX, Wang Z, Yang SS, Liu T, Huang YM. Generalizing projected gradient descent for deep-learning-aided massive MIMO detection.
                     IEEE Trans. on Wireless Communications, 2024, 23(3): 1827–1839. [doi: 10.1109/TWC.2023.3292124]
                 [34]   Joulin A, Grave E, Bojanowski P, Mikolov T. Bag of tricks for efficient text classification. In: Proc. of the 15th Conf. on the European
                     Chapter of the Association for Computational Linguistics. Valencia: ACL, 2017. 427–431. [doi: 10.18653/V1/E17-2068]
                 [35]   Zhang Y, Wallace B. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. In:
                     Proc. of the 8th Int’l Joint Conf. on Natural Language Processing. Taipei: Asian Federation of Natural Language Processing, 2017.
                     253–263. [doi: 10.18653/v1/I17-1026]
                 [36]   Liu PF, Qiu XP, Huang XJ. Recurrent neural network for text classification with multi-task learning. arXiv:1605.05101, 2016.