1424                                     Journal of Software  软件学报 Vol.32, No.5,  May 2021

[48]  Agarwal A, Dudík M, Wu ZWS. Fair regression: Quantitative definitions and reduction-based algorithms. In: Proc. of the ICML. 2019. 120−129.
[49]  Conitzer V, Freeman R, Shah N, Vaughan JW. Group fairness for the allocation of indivisible goods. In: Proc. of the AAAI. 2019. 1853−1860.
[50]  Kusner MJ, Russell C, Loftus JR, Silva R. Making decisions that reduce discriminatory impacts. In: Proc. of the ICML. 2019. 3591−3600.
[51]  Ustun B, Liu Y, Parkes DC. Fairness without harm: Decoupled classifiers with preference guarantees. In: Proc. of the ICML. 2019. 6373−6382.
[52]  Tsang A, Wilder B, Rice E, Tambe M, Zick Y. Group-fairness in influence maximization. In: Proc. of the IJCAI. 2019. 5997−6005.
[53]  Chen XY, Fain B, Lyu L, Munagala K. Proportionally fair clustering. In: Proc. of the ICML. 2019. 1032−1041.
[54]  Jiang R, Pacchiano A, Stepleton T, Jiang H, Chiappa S. Wasserstein fair classification. In: Proc. of the UAI. 2019. 862−872.
[55]  Jagielski M, Kearns MJ, Mao JM, Oprea A, Roth A, Sharifi-Malvajerdi S, Ullman J. Differentially private fair learning. In: Proc. of the ICML. 2019. 3000−3008.
[56]  Wang TL, Zhao JY, Yatskar M, Chang KW, Ordonez V. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In: Proc. of the ICCV. 2019. 5309−5318.
[57]  Quadrianto N, Sharmanska V, Thomas O. Discovering fair representations in the data domain. In: Proc. of the CVPR. 2019. 8227−8236.
[58]  DeVries T, Misra I, Wang CH, Maaten LVD. Does object recognition work for everyone? In: Proc. of the CVPR Workshops. 2019. 52−59.
[59]  Wang ZY, Qinami K, Karakozis IC, Genova K, Nair P, Hata K, Russakovsky O. Towards fairness in visual recognition: Effective strategies for bias mitigation. In: Proc. of the CVPR. 2020. 8916−8925.
[60]  Reid P, Martinez RD, Dass N, Kurohashi S, Jurafsky D, Yang DY. Automatically neutralizing subjective bias in text. In: Proc. of the AAAI. 2020. 480−489.
[61]  Bolukbasi T, Chang KW, Zou JY, Saligrama V, Kalai AT. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In: Proc. of the NIPS. 2016. 4349−4357.
[62]  Zhao JY, Wang TL, Yatskar M, Ordonez V, Chang KW. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In: Proc. of the EMNLP. 2017. 2979−2989.
[63]  Zhao JY, Wang TL, Yatskar M, Ordonez V, Chang KW. Gender bias in coreference resolution: Evaluation and debiasing methods. In: Proc. of the NAACL-HLT, Vol.2. 2018. 15−20.
[64]  Stanovsky G, Smith NA, Zettlemoyer L. Evaluating gender bias in machine translation. In: Proc. of the ACL. 2019. 1679−1684.
[65]  Du YP, Wu YB, Lan M. Exploring human gender stereotypes with word association test. In: Proc. of the EMNLP/IJCNLP. 2019. 6132−6142.
[66]  Papakyriakopoulos O, Hegelich S, Serrano JCM, Marco F. Bias in word embeddings. In: Proc. of the FAT*. 2020. 446−457.
[67]  Ma PC, Wang S, Liu J. Metamorphic testing and certified mitigation of fairness violations in NLP models. In: Proc. of the IJCAI. 2020. 458−465.
[68]  Badilla P, Marquez FB, Pérez J. WEFE: The word embeddings fairness evaluation framework. In: Proc. of the IJCAI. 2020. 430−436.
[69]  Yadav H, Du ZX, Joachims T. Fair learning-to-rank from implicit feedback. arXiv preprint arXiv:1911.08054v1, 2019.
[70]  Beutel A, Chen JL, Doshi T, Qian H, Wei L, Wu Y, Heldt L, Zhao Z, Hong LC, Chi EH, Goodrow C. Fairness in recommendation ranking through pairwise comparisons. In: Proc. of the KDD. 2019. 2212−2220.
[71]  Singh A, Joachims T. Policy learning for fairness in ranking. In: Proc. of the NeurIPS. 2019. 5427−5437.
[72]  Patro GK, Biswas A, Ganguly N, Gummadi KP, Chakraborty A. FairRec: Two-sided fairness for personalized recommendations in two-sided platforms. In: Proc. of the WWW. 2020. 1194−1204.
[73]  Patro GK, Chakraborty A, Ganguly N, Gummadi KP. Fair updates in two-sided market platforms: On incrementally updating recommendations. In: Proc. of the AAAI. 2020. 181−188.