                [20]    Calders T, Verwer S. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery,
                     2010,21(2):277−292.
                [21]    Edwards H, Storkey AJ. Censoring representations with an adversary. In: Proc. of the ICLR (Poster). 2016.
                [22]    Louizos C, Swersky K, Li YJ, Welling M, Zemel RS. The variational fair autoencoder. In: Proc. of the ICLR. 2016.
                [23]    Zemel RS, Wu Y, Swersky K, Pitassi T, Dwork C. Learning fair representations. In: Proc. of the ICML. 2013. 325−333.
                [24]    Madras D, Creager E, Pitassi T, Zemel RS. Learning adversarially fair and transferable representations. In: Proc. of the ICML. 2018.
                     3381−3390.
                [25]    Adel T, Valera I, Ghahramani Z, Weller A. One-network adversarial fairness. In: Proc. of the AAAI. 2019. 2412−2420.
                [26]    Feldman M, Friedler SA, Moeller J, Scheidegger C, Venkatasubramanian S. Certifying and removing disparate impact. In: Proc. of
                     the KDD. 2015. 259−268.
                [27]    Zafar MB, Valera I, Gomez-Rodriguez M, Gummadi KP. Fairness constraints: Mechanisms for fair classification. In: Proc. of the
                     AISTATS. 2017. 962−970.
                [28]    Berk R, Heidari H, Jabbari S, Kearns M, Roth A. Fairness in criminal justice risk assessments: The state of the art.
                     Sociological Methods & Research, 2018. 3−44.
                [29]    Kleinberg JM, Mullainathan S, Raghavan M. Inherent trade-offs in the fair determination of risk scores. In: Proc. of the ITCS. 2017.
                     1−23.
                [30]    Chouldechova A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 2017,5(2):
                     153−163.
                [31]    Pearl J. Causality. Cambridge University Press, 2009.
                [32]    Chiappa S. Path-specific counterfactual fairness. In: Proc. of the AAAI. 2019. 7801−7808.
                [33]    Loftus JR, Russell C, Kusner MJ, Silva R. Causal reasoning for algorithmic fairness. arXiv preprint arXiv:1805.05859v1, 2018.
                [34]    Beutel A, Chen JL, Zhao Z, Chi EH. Data decisions and theoretical implications when adversarially learning fair representations.
                     arXiv preprint arXiv:1707.00075v2, 2017.
                [35]    Zhao H, Gordon GJ. Inherent tradeoffs in learning fair representations. In: Proc. of the NeurIPS. 2019. 15649−15659.
                [36]    Khosravifard M, Fooladivanda D, Gulliver TA. Confliction of the convexity and metric properties in f-divergences. IEICE
                     Trans. on Fundamentals of Electronics, Communications and Computer Sciences, 2007,E90-A(9):1848−1853.
                [37]    Zhao H, Coston A, Adel T, Gordon GJ. Conditional learning of fair representations. In: Proc. of the ICLR. 2020.
                [38]    Xu DP, Yuan SH, Zhang L, Wu XT. FairGAN: Fairness-aware generative adversarial networks. In: Proc. of the BigData. 2018.
                     570−575.
                [39]    Xu DP, Yuan SH, Zhang L, Wu XT. FairGAN+: Achieving fair data generation and classification through generative
                     adversarial nets. In: Proc. of the BigData. 2019. 1401−1406.
                [40]    Xu DP, Wu YK, Yuan SH, Zhang L, Wu XT. Achieving causal fairness through generative adversarial networks. In: Proc. of the
                     IJCAI. 2019. 1452−1458.
                [41]    Kocaoglu M, Snyder C, Dimakis AG, Vishwanath S. CausalGAN: Learning causal implicit generative models with
                     adversarial training. In: Proc. of the ICLR (Poster). 2018.
                [42]    Creager E, Madras D, Jacobsen JH, Weis MA, Swersky K, Pitassi T, Zemel RS. Flexibly fair representation learning by
                     disentanglement. In: Proc. of the ICML. 2019. 1436−1445.
                [43]    Gordaliza P, Barrio E, Gamboa F, Loubes JM. Obtaining fairness using optimal transport theory. In: Proc. of the ICML.
                     2019. 2357−2365.
                [44]    Bechavod Y, Ligett K. Learning fair classifiers: A regularization-inspired approach. arXiv preprint arXiv:1707.00044v3, 2017.
                [45]    Russell C, Kusner MJ, Loftus JR, Silva R. When worlds collide: Integrating different counterfactual assumptions in
                     fairness. In: Proc. of the NIPS. 2017. 6414−6423.
                [46]    Wu YK, Zhang L, Wu XT, Tong HH. PC-fairness: A unified framework for measuring causality-based fairness. In: Proc. of the
                     NeurIPS. 2019. 3399−3409.
                [47]    Wu YK, Zhang L, Wu XT. Counterfactual fairness: Unidentification, bound and algorithm. In: Proc. of the IJCAI. 2019.
                     1438−1444.