[6] Horn J, De Jesus O, Hagan MT. Spurious valleys in the error surface of recurrent networks—Analysis and avoidance. IEEE Trans. on Neural Networks, 2009, 20(4): 686–700. [doi: 10.1109/TNN.2008.2012257]
[7] Werbos PJ. Backpropagation through time: What it does and how to do it. Proc. of the IEEE, 1990, 78(10): 1550–1560. [doi: 10.1109/5.58337]
[8] Williams RJ, Zipser D. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1989, 1(2): 270–280. [doi: 10.1162/neco.1989.1.2.270]
[9] Puskorius GV, Feldkamp LA. Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. IEEE Trans. on Neural Networks, 1994, 5(2): 279–297. [doi: 10.1109/72.279191]
[10] Jaeger H. Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the “echo state network” approach. Technical Report 159, German National Research Center for Information Technology, 2002. https://www.ai.rug.nl/minds/uploads/ESNTutorialRev.pdf
[11] Liu JW, Song ZY. Overview of recurrent neural networks. Control and Decision, 2022, 37(11): 2753–2768 (in Chinese with English abstract). [doi: 10.13195/j.kzyjc.2021.1241]
[12] Ku CC, Lee KY. Diagonal recurrent neural networks for dynamic systems control. IEEE Trans. on Neural Networks, 1995, 6(1): 144–156. [doi: 10.1109/72.363441]
[13] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation, 1997, 9(8): 1735–1780. [doi: 10.1162/neco.1997.9.8.1735]
[14] Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proc. of the 2014 Conf. on Empirical Methods in Natural Language Processing (EMNLP). Doha: Association for Computational Linguistics, 2014. 1724–1734. [doi: 10.3115/v1/d14-1179]
[15] Liu J, Shahroudy A, Xu D, Wang G. Spatio-temporal LSTM with trust gates for 3D human action recognition. In: Proc. of the 14th European Conf. on Computer Vision (ECCV). Amsterdam: Springer, 2016. 816–833. [doi: 10.1007/978-3-319-46487-9_50]
[16] Graves A, Schmidhuber J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 2005, 18(5–6): 602–610. [doi: 10.1016/j.neunet.2005.06.042]
[17] Dey R, Salem FM. Gate-variants of gated recurrent unit (GRU) neural networks. In: Proc. of the 60th IEEE Int’l Midwest Symp. on Circuits and Systems (MWSCAS). Boston: IEEE, 2017. 1597–1600. [doi: 10.1109/MWSCAS.2017.8053243]
[18] Scardapane S, Wang DH. Randomness in neural networks: An overview. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2017, 7(2): e1200. [doi: 10.1002/widm.1200]
[19] Cao WP, Wang XZ, Ming Z, Gao JZ. A review on neural networks with random weights. Neurocomputing, 2018, 275: 278–287. [doi: 10.1016/j.neucom.2017.08.040]
[20] Pao YH, Takefuji Y. Functional-link net computing: Theory, system architecture, and functionalities. Computer, 1992, 25(5): 76–79. [doi: 10.1109/2.144401]
[21] Zhang QS, Tsang ECC, He Q, Guo YT. Ensemble of kernel extreme learning machine based elimination optimization for multi-label classification. Knowledge-Based Systems, 2023, 278: 110817. [doi: 10.1016/j.knosys.2023.110817]
[22] Gong XR, Zhang T, Chen CLP, Liu ZL. Research review for broad learning system: Algorithms, theory, and applications. IEEE Trans. on Cybernetics, 2022, 52(9): 8922–8950. [doi: 10.1109/TCYB.2021.3061094]
[23] Wang DH, Li M. Stochastic configuration networks: Fundamentals and algorithms. IEEE Trans. on Cybernetics, 2017, 47(10): 3466–3479. [doi: 10.1109/TCYB.2017.2734043]
[24] Dai W, Li DP, Zhou P, Chai TY. Stochastic configuration networks with block increments for data modeling in process industries. Information Sciences, 2019, 484: 367–386. [doi: 10.1016/j.ins.2019.01.062]
[25] Dai W, Li DP, Yang CY, Ma XP. A model and data hybrid parallel learning method for stochastic configuration networks. Acta Automatica Sinica, 2021, 47(10): 2427–2437 (in Chinese with English abstract). [doi: 10.16383/j.aas.c190411]
[26] Zhang CL, Ding SF, Guo LL, Zhang J. Research progress on stochastic configuration network. Ruan Jian Xue Bao/Journal of Software, 2024, 35(5): 2379–2399 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6804.htm [doi: 10.13328/j.cnki.jos.006804]
[27] Jaeger H, Lukoševičius M, Popovici D, Siewert U. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 2007, 20(3): 335–352. [doi: 10.1016/j.neunet.2007.04.016]
[28] Yildiz IB, Jaeger H, Kiebel SJ. Re-visiting the echo state property. Neural Networks, 2012, 35: 1–9. [doi: 10.1016/j.neunet.2012.07.005]
[29] Wang HS, Yan XF. Optimizing the echo state network with a binary particle swarm optimization algorithm. Knowledge-Based Systems, 2015, 86: 182–193. [doi: 10.1016/j.knosys.2015.06.003]
[30] Dutoit X, Schrauwen B, van Campenhout J, Stroobandt D, van Brussel H, Nuttin M. Pruning and regularization in reservoir computing. Neurocomputing, 2009, 72(7–9): 1534–1546. [doi: 10.1016/j.neucom.2008.12.020]