
(3) Providing higher accuracy
Building algorithms that are both fair and reliable is the foundation of trustworthy machine learning.
The fifth challenge for fair machine learning is how to trade off algorithm performance against fairness. When a protected attribute is correlated with the prediction target, as in recidivism prediction, it is difficult to construct a score that carries no race-related information, and excluding factors such as poverty, unemployment, and social marginalization lowers accuracy. We therefore need to explore further ways of balancing accuracy and fairness.
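
As a minimal sketch of this trade-off (our illustration, not a method from the surveyed work; the penalty form, the weight lam, and all variable names are assumptions), the following Python example trains a logistic regression on synthetic data with an added demographic-parity penalty. Increasing lam shrinks the gap between the groups' positive-prediction rates at the cost of accuracy, which is precisely the tension described above.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the protected attribute a is correlated with the label y,
# mirroring the recidivism setting described above.
n = 2000
a = rng.integers(0, 2, n)                             # protected attribute (0/1)
x = rng.normal(0.0, 1.0, (n, 2)) + 0.8 * a[:, None]   # features leak a
y = (x[:, 0] + 0.5 * a + rng.normal(0.0, 1.0, n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=3000, lr=0.1):
    # Gradient descent on cross-entropy + lam * (demographic-parity gap)^2,
    # where the gap is the difference in mean predicted scores between groups.
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        p = sigmoid(x @ w)
        grad_ce = x.T @ (p - y) / n                   # cross-entropy gradient
        gap = p[a == 1].mean() - p[a == 0].mean()
        dp = p * (1.0 - p)                            # d sigmoid / d logit
        grad_gap = x[a == 1].T @ dp[a == 1] / (a == 1).sum() \
                 - x[a == 0].T @ dp[a == 0] / (a == 0).sum()
        w -= lr * (grad_ce + 2.0 * lam * gap * grad_gap)
    return w

for lam in [0.0, 1.0, 5.0]:
    w = train(lam)
    pred = sigmoid(x @ w) > 0.5
    acc = (pred == (y == 1)).mean()
    gap = pred[a == 1].mean() - pred[a == 0].mean()
    print(f"lam={lam:.1f}  accuracy={acc:.3f}  parity_gap={gap:+.3f}")

Sweeping lam traces out an accuracy-fairness frontier; how far along that frontier to operate is the application- and policy-dependent question the paper raises.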

6   Conclusion

Fairness is a relative social concept; fairness in an absolute sense does not exist. Fair machine learning algorithms gradually improve the fairness of machine learning by exploring mechanisms that eliminate unfairness. Fair representation, fair modeling, and fair decision making are the three key stages of fairness in trustworthy machine learning, and effectively locating and resolving the unfairness problems in these three stages is of great significance for the future research and development of fair machine learning algorithms. Fairness also carries legal and social meaning and is not purely a technical problem, so fairness research in trustworthy machine learning can be regarded as an interdisciplinary field between sociology and computer science. Future work needs to investigate fairness from technical, application, and ethical perspectives, deploy advanced fair machine learning algorithms in various application domains, and establish a unified and complete set of fairness metrics.
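
To make concrete what such a unified set of metrics might cover, the following Python sketch (our illustration; the function name and report keys are hypothetical, though the underlying definitions of demographic parity, equal opportunity, and predictive equality are standard group-fairness notions) computes three common between-group gaps from binary predictions.

import numpy as np

def group_fairness_report(y_true, y_pred, a):
    # Gaps between protected groups a=1 and a=0; all inputs are 0/1 arrays.
    g1, g0 = (a == 1), (a == 0)
    tpr = lambda g: y_pred[g & (y_true == 1)].mean()   # P(y_hat=1 | y=1, group)
    fpr = lambda g: y_pred[g & (y_true == 0)].mean()   # P(y_hat=1 | y=0, group)
    return {
        "demographic_parity_gap": y_pred[g1].mean() - y_pred[g0].mean(),
        "equal_opportunity_gap":  tpr(g1) - tpr(g0),
        "predictive_equality_gap": fpr(g1) - fpr(g0),
    }

# Usage on toy arrays:
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.4 + 0.2 * a).astype(int)
print(group_fairness_report(y_true, y_pred, a))

In general these criteria cannot all be satisfied simultaneously, which is one reason a unified, application-aware measurement framework remains an open problem.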

