[74] Chakraborty A, Patro GK, Ganguly N, Gummadi KP, Loiseau P. Equality of voice: Towards fair representation in crowdsourced
top-K recommendations. In: Proc. of the FAT*. 2019. 129−138.
[75] Fu ZH, Xian YK, Gao RY, Zhao JY, Huang QY, Ge YQ, Xu SY, Geng SJ, Shah C, Zhang YF, de Melo G. Fairness-aware
explainable recommendation over knowledge graphs. In: Proc. of the SIGIR. 2020. 69−78.
[76] Geyik SC, Ambler S, Kenthapadi K. Fairness-aware ranking in search & recommendation systems with application to LinkedIn
talent search. In: Proc. of the KDD. 2019. 2221−2231.
[77] Morik M, Singh A, Hong J, Joachims T. Controlling fairness and bias in dynamic learning-to-rank. In: Proc. of the SIGIR. 2020.
429−438.
[78] Hughes E, Leibo JZ, Phillips M, Tuyls K, Duéñez-Guzmán EA, Castañeda AG, Dunning I, Zhu T, McKee KR, Koster R, Roff H,
Graepel T. Inequity aversion improves cooperation in intertemporal social dilemmas. In: Proc. of the NeurIPS. 2018. 3330−3340.
[79] Zhang CJ, Shah JA. Fairness in multi-agent sequential decision-making. In: Proc. of the NIPS. 2014. 2636−2644.
[80] Jiang JC, Lu ZQ. Learning fairness in multi-agent systems. In: Proc. of the NeurIPS. 2019. 13854−13865.
[81] Zhou J, Wang L, Wang L, Zheng XL. Shared learning: Ant Financial’s solution. Communications of the CCF, 2020,15(6):51−57 (in Chinese).
[82] Li T, Sanjabi M, Beirami A, Smith V. Fair resource allocation in federated learning. In: Proc. of the ICLR. 2020.
[83] Mohri M, Sivek G, Suresh AT. Agnostic federated learning. In: Proc. of the ICML. 2019. 4615−4625.
[84] Kohavi R. Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid. In: Proc. of the KDD. 1996. 202−207.
[85] Wightman LF. LSAC national longitudinal bar passage study. Research Report, Law School Admission Council, 1998.
[86] Merler M, Ratha NK, Feris RS, Smith JR. Diversity in faces. arXiv preprint arXiv:1901.10436v6, 2019.
[87] Van Horn G, Mac Aodha O, Song Y, Cui Y, Sun C, Shepard A, Adam H, Perona P, Belongie SJ. The iNaturalist species classification and detection dataset. In: Proc. of the CVPR. 2018. 8769−8778.
[88] Bagdasaryan E, Poursaeed O, Shmatikov V. Differential privacy has disparate impact on model accuracy. In: Proc. of the NeurIPS.
2019. 15453−15462.
[89] D’Amour A, Srinivasan H, Atwood J, Baljekar P, Sculley D, Halpern Y. Fairness is not static: Deeper understanding of long term
fairness via simulation studies. In: Proc. of the FAT*. 2020. 525−534.
[90] Liu LT, Dean S, Rolf E, Simchowitz M, Hardt M. Delayed impact of fair machine learning. In: Proc. of the ICML. 2018.
3156−3164.
[91] Nori H, Jenkins S, Koch P, Caruana R. InterpretML: A unified framework for machine learning interpretability. arXiv preprint
arXiv:1909.09223v1, 2019.
[92] Bellamy RKE, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, Lohia P, Martino J, Mehta S, Mojsilovic A, Nagar S,
Ramamurthy KN, Richards JT, Saha D, Sattigeri P, Singh M, Varshney KR, Zhang Y. AI Fairness 360: An extensible toolkit for
detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 2019,63(4):1−15.
Appendix: Chinese references:
[1] Zhou ZH. Machine Learning. Beijing: Tsinghua University Press, 2016 (in Chinese).
[2] He JF. Safe and trustworthy artificial intelligence. Information Security and Communications Privacy, 2019(10):5−8 (in Chinese).
[3] Meng XF, Wang LX, Liu JX. Data privacy, monopoly, and fairness in the era of artificial intelligence. Big Data Research, 2020,6(1):35−46 (in Chinese).
[4] Liu RX, Chen H, Guo RY, Zhao D, Liang WJ, Li CP. Privacy attacks and defenses in machine learning. Ruan Jian Xue Bao/Journal of Software, 2020,31(3):866−892 (in Chinese). http://www.jos.org.cn/1000-9825/5904.htm [doi: 10.13328/j.cnki.jos.005904]
[5] Tan ZW, Zhang LF. Survey on privacy protection in machine learning. Ruan Jian Xue Bao/Journal of Software, 2020,31(7):2127−2156 (in Chinese). http://www.jos.org.cn/1000-9825/6052.htm [doi: 10.13328/j.cnki.jos.006052]
[6] Cheng KY, Wang N, Shi WX, Zhan YZ. Research advances in the interpretability of deep learning. Journal of Computer Research and Development, 2020,57(6):1208−1217 (in Chinese).
[81] Zhou J, Wang L, Wang L, Zheng XL. Shared learning: Ant Financial’s solution. Communications of the CCF, 2020,15(6):51−57 (in Chinese).