
         [12]    Agarwal R, Schuurmans D, Norouzi M. An optimistic perspective on offline reinforcement learning. In: Proc. of the 37th Int’l Conf. on Machine Learning. 2020. 104−114.
         [13]    Botvinick M, Ritter S, Wang J, et al. Reinforcement learning, fast and slow. Trends in Cognitive Sciences, 2019, 23(5): 408−422.
         [14]    Huang ZG, Liu Q, Zhang LH, et al. Research and development on deep hierarchical reinforcement learning. Ruan Jian Xue Bao/Journal of Software, 2023, 34(2): 733−760 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6706.htm [doi: 10.13328/j.cnki.jos.006706]
         [15]    Sun T, He Z, Qian H, et al. BBTv2: Towards a gradient-free future with large language models. In: Proc. of the 2022 Conf. on
              Empirical Methods in Natural Language Processing. 2022. 3916−3930.
         [16]    Sun T, Shao Y, Qian H, et al. Black-box tuning for language-model-as-a-service. In: Proc. of the 39th Int’l Conf. on Machine Learning. 2022. 20841−20855.
         [17]    Sanderson K. GPT-4 is here: What scientists think. Nature, 2023, 615(7954): 773.
         [18]    Liu P, Yuan W, Fu J, et al. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 2023, 55(9): 195:1−195:35.
         [19]    Lester B, Al-Rfou R, Constant N. The power of scale for parameter-efficient prompt tuning. In: Proc. of the 2021 Conf. on Empirical Methods in Natural Language Processing. 2021. 3045−3059.
         [20]    Hu S, Ding N, Wang H, et al. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics. 2022. 2225−2240.
         [21]    Ke G, Meng Q, Finley T, et al. LightGBM: A highly efficient gradient boosting decision tree. In: Advances in Neural Information
              Processing Systems 30. 2017. 3146−3154.
         [22]    Conn AR, Scheinberg K, Vicente LN. Introduction to Derivative-Free Optimization. Philadelphia: Society for Industrial and Applied Mathematics, 2009.
         [23]    Rios L, Sahinidis N. Derivative-free optimization: A review of algorithms and comparison of software implementations. Journal of
              Global Optimization, 2013, 56(3): 1247−1293.
         [24]    Zhou ZH, Yu Y, Qian C. Evolutionary Learning: Advances in Theories and Algorithms. Springer, 2019.
         [25]    Li S, Kirby R, Zhe S. Batch multi-fidelity Bayesian optimization with deep auto-regressive networks. In: Advances in Neural Information Processing Systems 34. 2021. 25463−25475.
         [26]    Poloczek M, Wang J, Frazier P. Multi-information source optimization. In: Advances in Neural Information Processing Systems 30.
              2017. 4288−4298.
         [27]    Hu YQ, Yu Y, Tu W, et al. Multi-fidelity automatic hyper-parameter tuning via transfer series expansion. In: Proc. of the 33rd
              AAAI Conf. on Artificial Intelligence. 2019. 3846−3853.
         [28]    Nemirovski A, Juditsky A, Lan G, et al. Robust stochastic approximation approach to stochastic programming. SIAM Journal on
              Optimization, 2009, 19(4): 1574−1609.
         [29]    Bottou L. Large-scale machine learning with stochastic gradient descent. In: Proc. of the 19th Int’l Conf. on Computational Statistics. 2010. 177−186.
         [30]    Bergstra J, Bardenet R, Bengio Y, et al. Algorithms for hyper-parameter optimization. In: Advances in Neural Information Processing Systems 24. 2011. 2546−2554.
         [31]    Wu J, Toscano-Palmerin S, Frazier PI, et al. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In: Proc. of the 35th Conf. on Uncertainty in Artificial Intelligence. 2019. 788−798.
         [32]    Qian C, Yu Y, Zhou ZH. Analyzing evolutionary optimization in noisy environments. Evolutionary Computation, 2018, 26(1): 1−41.
         [33]    Qian C, Bian C, Yu Y, et al. Analysis of noisy evolutionary optimization when sampling fails. Algorithmica, 2021, 83(4): 940−975.
         [34]    Kandasamy K, Dasarathy G, Schneider J, et al. Multi-fidelity Bayesian optimisation with continuous approximations. In: Proc. of
              the 34th Int’l Conf. on Machine Learning. 2017. 1799−1808.
         [35]    Falkner S, Klein A, Hutter F. BOHB: Robust and efficient hyperparameter optimization at scale. In: Proc. of the 35th Int’l Conf. on
              Machine Learning. 2018. 1436−1445.