2096                                                       Journal of Software (软件学报), 2024, Vol. 35, No. 4


     the 42nd Annual Int’l Symp. on Computer Architecture. Oregon: Association for Computing Machinery, 2015. 158–169. [doi: 10.1145/2749469.2750392]
 [4] Hunter AH, Kennelly C, Turner P, Gove D, Moseley T, Ranganathan P. Beyond malloc efficiency to fleet efficiency: A hugepage-aware memory allocator. In: Proc. of the 15th USENIX Symp. on Operating Systems Design and Implementation. New York: USENIX Association, 2021. 257–273.
 [5] Berger ED, Zorn BG, McKinley KS. Composing high-performance memory allocators. In: Proc. of the 2001 ACM SIGPLAN Conf. on Programming Language Design and Implementation. Utah: Association for Computing Machinery, 2001. 114–124. [doi: 10.1145/378795.378821]
 [6] Yun H, Mancuso R, Wu ZP, Pellizzoni R. PALLOC: DRAM bank-aware memory allocator for performance isolation on multicore platforms. In: Proc. of the 19th IEEE Real-time and Embedded Technology and Applications Symp. Berlin: IEEE, 2014. 155–166. [doi: 10.1109/RTAS.2014.6925999]
 [7] Herter J, Backes P, Haupenthal F, Reineke J. CAMA: A predictable cache-aware memory allocator. In: Proc. of the 23rd Euromicro Conf. on Real-time Systems. Porto: IEEE, 2011. 23–32. [doi: 10.1109/ECRTS.2011.11]
 [8] Qiu JF, Hua ZH, Fan J, Liu L. Evolution of memory partitioning technologies: Case study through page coloring. Ruan Jian Xue Bao/Journal of Software, 2022, 33(2): 751–769 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6370.htm [doi: 10.13328/j.cnki.jos.006370]
 [9] Roghanchi S, Eriksson J, Basu N. Ffwd: Delegation is (much) faster than you think. In: Proc. of the 26th Symp. on Operating Systems Principles. Shanghai: Association for Computing Machinery, 2017. 342–358. [doi: 10.1145/3132747.3132771]
[10] Hendler D, Incze I, Shavit N, Tzafrir M. Flat combining and the synchronization-parallelism tradeoff. In: Proc. of the 32nd Annual ACM Symp. on Parallelism in Algorithms and Architectures. Santorini: Association for Computing Machinery, 2010. 355–364. [doi: 10.1145/1810479.1810540]
[11] Fatourou P, Kallimanis ND. Revisiting the combining synchronization technique. In: Proc. of the 17th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming. Louisiana: Association for Computing Machinery, 2012. 257–266. [doi: 10.1145/2145816.2145849]
[12] Dice D, Marathe VJ, Shavit N. Flat-combining NUMA locks. In: Proc. of the 33rd Annual ACM Symp. on Parallelism in Algorithms and Architectures. California: Association for Computing Machinery, 2011. 65–74. [doi: 10.1145/1989493.1989502]
[13] Luchangco V, Nussbaum D, Shavit N. A hierarchical CLH queue lock. In: Proc. of the 12th Int’l Conf. on Parallel Processing. Dresden: Springer, 2006. 801–810. [doi: 10.1007/11823285_84]
[14] Lozi JP, David F, Thomas G, Lawall JL, Muller G. Remote core locking: Migrating critical-section execution to improve the performance of multithreaded applications. In: Proc. of the 2012 USENIX Conf. on Annual Technical Conf. Boston: USENIX Association, 2012. 65–76.
[15] Mellor-Crummey JM, Scott ML. Algorithms for scalable synchronization on shared-memory multiprocessors. ACM Trans. on Computer Systems, 1991, 9(1): 21–65. [doi: 10.1145/103727.103729]
[16] Bracha G, Cook W. Mixin-based inheritance. In: Proc. of the 1990 European Conf. on Object-oriented Programming Systems, Languages, and Applications. Ottawa: ACM, 1990. 303–311. [doi: 10.1145/97945.97982]
[17] Masmano M, Ripoll I, Crespo A, Real J. TLSF: A new dynamic memory allocator for real-time systems. In: Proc. of the 16th Euromicro Conf. on Real-time Systems (ECRTS 2004). Catania: IEEE, 2004. 79–88. [doi: 10.1109/EMRTS.2004.1311009]
[18] Berger ED, McKinley KS, Blumofe RD, Wilson PR. Hoard: A scalable memory allocator for multithreaded applications. In: Proc. of the 9th Int’l Conf. on Architectural Support for Programming Languages and Operating Systems. Cambridge: Association for Computing Machinery, 2000. 117–128. [doi: 10.1145/378993.379232]
[19] Kukanov A, Voss MJ. The foundations for scalable multi-core software in Intel threading building blocks. Intel Technology Journal, 2007, 11(4): 309–322.
[20] Leijen D, Zorn BG, de Moura L. Mimalloc: Free list sharding in action. In: Proc. of the 17th Asian Symp. on Programming Languages and Systems. Nusa Dua: Springer, 2019. 244–265. [doi: 10.1007/978-3-030-34175-6_13]
[21] Liétar P, Butler T, Clebsch S, Drossopoulou S, Franco J, Parkinson MJ, Shamis A, Wintersteiger CM, Chisnall D. Snmalloc: A message passing allocator. In: Proc. of the 2019 ACM SIGPLAN Int’l Symp. on Memory Management. Phoenix: Association for Computing Machinery, 2019. 122–135. [doi: 10.1145/3315573.3329980]
[22] Berger ED, Zorn BG. DieHard: Probabilistic memory safety for unsafe languages. In: Proc. of the 27th ACM SIGPLAN Conf. on Programming Language Design and Implementation. Ottawa: Association for Computing Machinery, 2006. 158–168. [doi: 10.1145/1133981.1134000]