Learning fair prediction models with an imputed sensitive variable: Empirical studies

DC Field: Value

dc.contributor.author: Kim, Yongdai
dc.contributor.author: Jeong, Hwichang
dc.date.accessioned: 2022-06-24T00:31:51Z
dc.date.available: 2022-06-24T00:31:51Z
dc.date.created: 2022-05-06
dc.date.issued: 2022-03
dc.identifier.citation: Communications for Statistical Applications and Methods, Vol.29 No.2, pp.251-261
dc.identifier.issn: 2287-7843
dc.identifier.uri: https://hdl.handle.net/10371/183918
dc.description.abstract: As AI exerts a wide range of influence on human social life, issues of AI transparency and ethics are emerging. In particular, it is widely known that, due to historical bias in data against ethics or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on a certain sensitive group (e.g., non-white, women). Demographic disparities due to AI, which refer to socially unacceptable bias in which an AI model favors certain groups (e.g., white, men) over other groups (e.g., black, women), have been observed frequently in many applications of AI, and many studies have been done recently to develop AI algorithms that remove or alleviate such demographic disparities in trained AI models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as part of the input variables is prohibited by laws or regulations in order to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure: first, the sensitive variable is fully included in the learning phase so that the prediction model depends on it, and then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without substantially hampering fairness.
dc.language: English
dc.publisher: 한국통계학회 (The Korean Statistical Society)
dc.title: Learning fair prediction models with an imputed sensitive variable: Empirical studies
dc.type: Article
dc.identifier.doi: 10.29220/CSAM.2022.29.2.251
dc.citation.journaltitle: Communications for Statistical Applications and Methods
dc.identifier.wosid: 000782915600008
dc.identifier.scopusid: 2-s2.0-85129348566
dc.citation.endpage: 261
dc.citation.number: 2
dc.citation.startpage: 251
dc.citation.volume: 29
dc.identifier.kciid: ART002823076
dc.description.isOpenAccess: N
dc.contributor.affiliatedAuthor: Kim, Yongdai
dc.type.docType: Article
dc.description.journalClass: 1
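The abstract above describes a two-stage procedure: the sensitive variable is used during the learning phase, and an imputed value stands in for it at prediction time. The sketch below illustrates that idea with scikit-learn; the synthetic data, the logistic-regression estimators, and the omission of any explicit fairness constraint are assumptions made for illustration, not the paper's actual experimental setup.

    # Minimal sketch of the two-stage procedure (illustrative assumptions,
    # not the paper's exact method or data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy data: X = non-sensitive features, s = sensitive variable, y = label.
    n = 2000
    X = rng.normal(size=(n, 5))
    s = (X[:, 0] + rng.normal(size=n) > 0).astype(int)          # sensitive variable correlated with X
    y = (X[:, 1] + 0.5 * s + rng.normal(size=n) > 0).astype(int)

    X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
        X, s, y, test_size=0.3, random_state=0
    )

    # Stage 1 (learning phase): the true sensitive variable is available,
    # so the prediction model is trained on (X, s).
    pred_model = LogisticRegression().fit(np.column_stack([X_tr, s_tr]), y_tr)

    # An imputation model for s is trained from the non-sensitive features alone.
    impute_model = LogisticRegression().fit(X_tr, s_tr)

    # Stage 2 (prediction phase): the sensitive variable may not be used directly,
    # so it is replaced by its imputed value before the prediction model is applied.
    s_hat = impute_model.predict(X_te)
    y_hat = pred_model.predict(np.column_stack([X_te, s_hat]))

    print("accuracy with imputed sensitive variable:", (y_hat == y_te).mean())

In the paper's setting, the Stage 1 model would additionally be trained under a fairness criterion; the sketch only shows how the imputed sensitive variable replaces the true one at prediction time.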

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
