The SYCL unified specification can also be implemented on top of HIP [38].
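As a minimal illustration (not part of the original text), the sketch below is a standard SYCL 2020 vector addition; under a SYCL implementation with a HIP backend, such as AdaptiveCpp (formerly hipSYCL) or a DPC++ build targeting HIP, the same source can run unchanged on HIP-capable GPUs. The queue and kernel APIs shown are plain SYCL, not HIP-specific.

    // Minimal SYCL 2020 vector-add sketch; portable across SYCL backends,
    // including HIP-based ones (e.g., AdaptiveCpp, formerly hipSYCL).
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        constexpr size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        sycl::queue q{sycl::default_selector_v};  // default device; a GPU if one is visible
        {
            sycl::buffer<float> ba{a.data(), sycl::range<1>{n}};
            sycl::buffer<float> bb{b.data(), sycl::range<1>{n}};
            sycl::buffer<float> bc{c.data(), sycl::range<1>{n}};

            q.submit([&](sycl::handler& h) {
                sycl::accessor xa{ba, h, sycl::read_only};
                sycl::accessor xb{bb, h, sycl::read_only};
                sycl::accessor xc{bc, h, sycl::write_only, sycl::no_init};
                h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                    xc[i] = xa[i] + xb[i];  // one work-item per element
                });
            });
        }  // buffer destructors synchronize and copy results back to the host

        std::cout << "c[0] = " << c[0] << std::endl;  // expected output: c[0] = 3
        return 0;
    }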
3 Conclusion
Based on the current status and trends of high-performance computing application software, this paper analyzed the parallel computing characteristics of applications including cosmological N-body simulation, atmospheric and Earth system models, phase-field dynamics for computational materials, molecular dynamics, quantum computational chemistry, and lattice quantum chromodynamics, distilled the typical algorithms of these applications and the common problems of their software, and discussed countermeasures for addressing those problems.
In domestic heterogeneous high-performance computing environments, because the domestic architectures differ from mainstream architectures, algorithm implementation and software design diverge substantially from established practice. Moreover, China's independently designed hardware architectures started relatively late, and the software ecosystem built around them, including system tools and program debugging tools, is still far from mature, which adds practical difficulty for application development teams. To carry out algorithm and software design effectively on domestic heterogeneous high-performance computing platforms, we offer two recommendations for future development:
(1) Establish unified hardware and software standards for high-performance computing in China as soon as possible. Unified specifications can be developed for instruction sets, API interfaces, software architecture, and grid-environment interconnection, among other aspects.
(2) Maintain compatibility with existing mainstream international standards and participate actively in drafting international specifications, so as to improve the compatibility of domestic supercomputing platforms with open-source software.
References:
[1] Ezell SJ, Atkinson RD. The vital importance of high performance computing to U.S. competitiveness. 2016. http://www2.itif.org/2016-high-performance-computing.pdf
[2] Dongarra J, et al. Supercomputer Top500 list. 2019. https://www.top500.org/lists
[3] Jin Z, Lu ZH, Li HY, Chi XB, Sun JC. Origin of high performance computing—Current status and developments of scientific
computing applications. Bulletin of Chinese Academy of Sciences, 2019, 34(6):625−639 (in Chinese with English abstract).
[4] USQCD Collaboration. US lattice quantum chromodynamics. 2019. https://www.usqcd.org
[5] DeTar C. MILC code. 2019. http://www.physics.utah.edu/~detar/milc/milc_qcd.html
[6] Boyle P, Cossu G, Yamaguchi A, Portelli A. Grid: A next generation data parallel C++ QCD library. arXiv:1512.03487v1, 2015.
[7] Clark MA, Babich R, Barros K, Brower R, Rebbi C. Solving lattice QCD systems of equations using mixed precision solvers on GPUs. Computer Physics Communications, 2010,181:1517−1528.
[8] Plimpton S. Fast parallel algorithms for short-range molecular dynamics. Journal of Computational Physics, 1995,117:1−19.
[9] Phillips JC, Braun R, Wang W, Gumbart J, Tajkhorshid E, Villa E, Chipot C, Skeel RD, Kale L, Schulten K. Scalable molecular
dynamics with NAMD. Journal of Computational Chemistry, 2005,26(16):1781−1802.
[10] Valiev M, Bylaska EJ, Govind N, Kowalski K, Straatsma TP, van Dam HJJ, Wang D, Nieplocha J, Apra E, Windus TL, de Jong
WA. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations. Computer Physics
Communications, 2010,181(9):1477−1489.
[11] Dongarra J, Foster I, Fox G, Gropp W, Kennedy K, Torczon L, White A. Sourcebook of Parallel Computing. Morgan Kaufmann Publishers, 2003.
[12] Kothe DB, Diachin L. The exascale computing project. 2019. https://www.exascaleproject.org
[13] Mo ZY, Zhang AQ, Liu QK, Cao XL. Parallel algorithm and parallel programming: From specialty to generality as well as software
reuse. Science in China (Information Sciences), 2016,46(10):1392−1410 (in Chinese with English abstract).
[14] Mo ZY, Zhang AQ, Cao XL, Liu QK, Xu XW, An HB, Pei WB, Zhu S. JASMIN: A parallel software infrastructure for scientific computing. Frontiers of Computer Science in China, 2010,4(4):480−488.
[15] Liu QK, Zhao WB, Cheng J, Mo ZY, Zhang AQ, Liu JJ. A programming framework for large scale numerical simulations on
unstructured mesh. In: Proc. of the IEEE Int’l Conf. on High Performance and Smart Computing (IEEE HPSC). New York, 2016.
310−315.
[16] Chi XB, et al. National High Performance Computing Environment Development Report. Beijing: Science Press, 2018 (in
Chinese).
[17] Colella P. Defining software requirements for scientific computing. 2004. http://www.lanl.gov/orgs/hpc/salishan/salishan2005/davidpatterson.pdf