Page 197 - 《软件学报》2025年第9期
P. 197

4108                                                       软件学报  2025  年第  36  卷第  9  期


                 [30]   Haghighatlari M, Li J, Guan XY, Zhang OF, Das A, Stein CJ, Heidar-Zadeh F, Liu ML, Head-Gordon M, Bertels L, Hao HX, Leven I,
                     Head-Gordon  T.  NewtonNet:  A  Newtonian  message  passing  network  for  deep  learning  of  interatomic  potentials  and  forces.  Digital
                     Discovery, 2022, 1(3): 333–343. [doi: 10.1039/d2dd00008c]
                 [31]   Batzner S, Musaelian A, Sun LX, Geiger M, Mailoa JP, Kornbluth M, Molinari N, Smidt TE, Kozinsky B. E(3)-equivariant graph neural
                     networks for data-efficient and accurate interatomic potentials. Nature Communications, 2022, 13(1): 2453. [doi: 10.1038/s41467-022-
                     29939-5]
                 [32]   Gasteiger J, Giri S, Margraf JT, Günnemann S. Fast and uncertainty-aware directional message passing for non-equilibrium molecules. In:
                     Proc. of the 2020 Machine Learning for Molecules Workshop. NeurIPS, 2020. 1–6.
                 [33]   Gasteiger J, Groß J, Günnemann S. Directional message passing for molecular graphs. In: Proc. of the 2020 Int’l Conf. on Learning
                     Representations (ICLR). OpenReview.net, 2020.
                 [34]   Unke OT, Chmiela S, Gastegger M, Schütt KT, Sauceda HE, Müller KR. SpookyNet: Learning force fields with electronic degrees of
                     freedom and nonlocal effects. Nature Communications, 2021, 12(1): 7273. [doi: 10.1038/s41467-021-27504-0]
                 [35]   Hu SY, Zhang WT, Sha QC, Pan F, Wang LW, Jia WL, Tan GM, Zhao T. RLEKF: An optimizer for deep potential with ab initio
                     accuracy. In: Proc. of the 2023 AAAI Conf. on Artificial Intelligence. Washington DC: AAAI, 2023. 7910–7918. [doi: 10.1609/aaai.
                     v37i7.25957]
                 [36]   Huang  YF,  Kang  J,  Goddard  III  WA,  Wang  LW.  Density  functional  theory  based  neural  network  force  fields  from  energy
                     decompositions. Physical Review B, 2019, 99(6): 064103. [doi: 10.1103/PhysRevB.99.064103]
                 [37]   Novikov IS, Gubaev K, Podryabinkin EV, Shapeev AV. The MLIP package: Moment tensor potentials with MPI and active learning.
                     Machine Learning: Science and Technology, 2021, 2(2): 025002. [doi: 10.1088/2632-2153/abc9fe]
                 [38]   Drautz R. Atomic cluster expansion for accurate and transferable interatomic potentials. Physical Review B, 2019, 99(1): 014104. [doi: 10.
                     1103/PhysRevB.99.014104]
                 [39]   Lysogorskiy Y, van der Oord C, Bochkarev A, Menon S, Rinaldi M, Hammerschmidt T, Mrovec M, Thompson A, Csányi G, Ortner C,
                     Drautz R. Performant implementation of the atomic cluster expansion (PACE) and application to copper and silicon. npj Computational
                     Materials, 2021, 7(1): 97. [doi: 10.1038/s41524-021-00559-9]
                 [40]   Bartók AP, Kondor R, Csányi G. On representing chemical environments. Physical Review B, 2013, 87(18): 184115. [doi: 10.1103/
                     physrevb.87.184115]
                 [41]   Kingma DP, Ba LJ. Adam: A method for stochastic optimization. In: Proc. of the 2015 Int’l Conf. on Learning Representations. Ithaca,
                     2015.
                 [42]   Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine
                     Learning Research, 2011, 12: 2121–2159.
                 [43]   McMahan HB, Streeter M. Adaptive bound optimization for online convex optimization. In: Proc. of the 23rd Annual Conf. on Learning
                     Theory. 2010. 244–256.
                 [44]   Kurbiel T, Khaleghian S. Training of deep neural networks based on distance measures using RMSProp. arXiv:1708.01911, 2017.
                 [45]   Zeiler MD. ADADELTA: An adaptive learning rate method. arXiv:1212.5701, 2012.
                 [46]   Martens  J,  Grosse  R.  Optimizing  neural  networks  with  Kronecker-factored  approximate  curvature.  In:  Proc.  of  the  32nd  Int’l
                     Conf. on Int’l Conf. on Machine Learning. Lille: JMLR.org, 2015. 2408–2417.
                 [47]   Liu  DC,  Nocedal  J.  On  the  limited  memory  BFGS  method  for  large  scale  optimization.  Mathematical  Programming,  1989,  45(1–3):
                     503–528. [doi: 10.1007/bf01589116]
                 [48]   Zhu CY, Byrd RH, Lu PH, Nocedal J. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization.
                     ACM Trans. on Mathematical Software (TOMS), 1997, 23(4): 550–560. [doi: 10.1145/279232.279236]
                 [49]   Gupta  V,  Koren  T,  Singer  Y.  Shampoo:  Preconditioned  stochastic  tensor  optimization.  In:  Proc.  of  the  35th  Int’l  Conf.  on  Machine
                     Learning. Stockholm: PMLR, 2018. 1842–1850.
                 [50]   Yao ZW, Gholami A, Shen S, Mustafa M, Keutzer K, Mahoney M. ADAHESSIAN: An adaptive second order optimizer for machine
                     learning. In: Proc. of the 2021 AAAI Conf. on Artificial Intelligence. Palo Alto: AIAA, 2021. 10665–10673. [doi: 10.1609/aaai.v35i12.
                     17275]
                 [51]   Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 1960, 82(1): 35–45. [doi: 10.1115/
                     1.3662552]
                 [52]   Witkoskie JB, Doren DJ. Neural network models of potential energy surfaces: Prototypical examples. Journal of Chemical Theory and
                     Computation, 2005, 1(1): 14–23. [doi: 10.1021/ct049976i]
                 [53]   Haykin S. Kalman Filtering and Neural Networks. Wiley Online Library, 2001. [doi: 10.1002/0471221546]
   192   193   194   195   196   197   198   199   200   201   202