
         only a small number of model parameters needs to be sent back to achieve the same convergence as returning the full model, thereby greatly reducing the communication overhead.
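         To make the communication-reduction idea above concrete, the following is a minimal sketch of one common way to return only a small subset of parameters: each client uploads just the top-k largest-magnitude parameter updates as (index, value) pairs, and the server applies them sparsely. This is an illustrative assumption rather than the specific mechanism of the surveyed methods; the function names and the k_ratio parameter are hypothetical.

```python
import numpy as np

def select_topk_update(local_params, global_params, k_ratio=0.01):
    """Client side (sketch): keep only the k_ratio fraction of parameters
    whose local change is largest in magnitude, and return them as sparse
    (index, value) pairs instead of the full parameter vector."""
    delta = local_params - global_params
    k = max(1, int(k_ratio * delta.size))
    idx = np.argpartition(np.abs(delta), -k)[-k:]  # k largest-magnitude updates
    return idx, delta[idx]

def apply_sparse_update(global_params, idx, values):
    """Server side (sketch): apply one client's sparse update to the global model."""
    updated = global_params.copy()
    updated[idx] += values
    return updated

# Toy usage: a 10,000-parameter model, roughly 1% of parameters uploaded.
global_params = np.zeros(10_000)
local_params = global_params + np.random.randn(10_000) * 0.01
idx, values = select_topk_update(local_params, global_params, k_ratio=0.01)
global_params = apply_sparse_update(global_params, idx, values)
print(f"uploaded {idx.size} of {global_params.size} parameters")
```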

         References:
 [1]  Subramaniyaswamy V, Logesh R, Indragandhi V. Intelligent sports commentary recommendation system for individual cricket players. Int’l Journal of Advanced Intelligence Paradigms, 2018,10(1-2):103−117.
 [2]  Manogaran G, Varatharajan R, Priyan MK. Hybrid recommendation system for heart disease diagnosis based on multiple kernel learning with adaptive neuro-fuzzy inference system. Multimedia Tools and Applications, 2018,77(4):4379−4399.
 [3]  Shin H, Kim S, Shin J, et al. Privacy enhanced matrix factorization for recommendation with local differential privacy. IEEE Trans. on Knowledge and Data Engineering, 2018,30(9):1770−1782.
 [4]  McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data. In: Proc. of the 20th Int’l Conf. on Artificial Intelligence and Statistics. Fort Lauderdale: PMLR, 2017. 1273−1282.
 [5]  Bonawitz K, Ivanov V, Kreuter B, et al. Practical secure aggregation for privacy-preserving machine learning. In: Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security. Dallas: ACM, 2017. 1175−1191.
 [6]  Nasr M, Shokri R, Houmansadr A. Comprehensive privacy analysis of deep learning: Stand-alone and federated learning under passive and active white-box inference attacks. In: Proc. of the IEEE Symp. on Security and Privacy (SP). San Francisco: IEEE, 2019. 739−753.
 [7]  Radio Spectrum Policy Group. RSPG report on the results of the public consultation on the Review of the EU Telecommunications Framework. Technical Report, 2016. http://spectrum.welter.fr/international/rspg/reports/rspg-report-2016-framework-review.pdf
 [8]  Huang K, Zhu G, You C, et al. Communication, computing, and learning on the edge. In: Proc. of the IEEE Int’l Conf. on Communication Systems (ICCS). Chengdu: IEEE, 2018. 268−273.
 [9]  Song X, Feng F, Han X, et al. Neural compatibility modeling with attentive knowledge distillation. In: Proc. of the 41st Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. New York: ACM, 2018. 5−14.
[10]  Jeong E, Oh S, Kim H. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data. In: Proc. of the 2nd Workshop on Machine Learning on the Phone and other Consumer Devices (MLPCD 2). Montréal: JMLR, 2018.
[11]  Luo L, Huang W, Zeng Q. Learning personalized end-to-end goal-oriented dialog. In: Proc. of the AAAI Conf. on Artificial Intelligence, Vol.33. Honolulu: AAAI, 2019. 6794−6801.
[12]  Yang Q, Liu Y, Chen T, et al. Federated machine learning: Concept and applications. ACM Trans. on Intelligent Systems and Technology (TIST), 2019,10(2):1−19.
[13]  Li T, Sahu AK, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 2020,37(3):50−60.
[14]  Smith V, Chiang CK, Sanjabi M. Federated multi-task learning. In: Proc. of the Advances in Neural Information Processing Systems (NIPS). Long Beach: Curran Associates, Inc., 2017. 4424−4434.
[15]  Gao D, Liu Y, Huang A, et al. Privacy-preserving heterogeneous federated transfer learning. In: Proc. of the 2019 IEEE Int’l Conf. on Big Data. Los Angeles: IEEE, 2019. 2552−2559.
[16]  Nadiger C, Kumar A, Abdelhak S. Federated reinforcement learning for fast personalization. In: Proc. of the 2019 IEEE 2nd Int’l Conf. on Artificial Intelligence and Knowledge Engineering (AIKE). Sardinia: IEEE, 2019. 123−127.
[17]  Li Q, Wen Z, He B. Practical federated gradient boosting decision trees. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence (AAAI 2020). New York: AAAI, 2020. 4642−4649.
[18]  Yurochkin M, Agarwal M, Ghosh S, et al. Bayesian nonparametric federated learning of neural networks. In: Proc. of the Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 7252−7261.
[19]  Liu Y, Kang Y, Xing C, et al. A secure federated transfer learning framework. IEEE Intelligent Systems, 2020,35(4):70−82. [doi: 10.1109/MIS.2020.2988525]
[20]  Nadiger C, Kumar A, Abdelhak S. Federated reinforcement learning for fast personalization. In: Proc. of the 2019 IEEE 2nd Int’l Conf. on Artificial Intelligence and Knowledge Engineering (AIKE). Sardinia: IEEE, 2019. 123−127.
[21]  Ke G, Meng Q, Finley T, et al. LightGBM: A highly efficient gradient boosting decision tree. In: Proc. of the Advances in Neural Information Processing Systems (NIPS). Long Beach: Curran Associates, Inc., 2017. 3146−3154.
[22]  Sharma S, Chen K. Privacy-preserving boosting with random linear classifiers. In: Proc. of the 2018 ACM SIGSAC Conf. on Computer and Communications Security. Toronto: ACM, 2018. 2294−2296.