Emergent Unfairness: Normative Assumptions and Contradictions in Algorithmic Fairness-Accuracy Trade-Off Research
Abstract
Across machine learning (ML) subdisciplines, researchers make explicit mathematical assumptions in order to facilitate proof-writing. We note that, specifically in the area of fairness-accuracy trade-off optimization scholarship, similar attention is not paid to the normative assumptions that ground this approach. Such assumptions include that 1) accuracy and fairness are in inherent opposition to one another, 2) strict notions of mathematical equality can adequately model fairness, 3) it is possible to measure the accuracy and fairness of decisions independently of historical context, and 4) collecting more data on marginalized individuals is a reasonable solution to mitigate the effects of the trade-off. We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions: While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness. We conclude by suggesting a concrete path forward toward a potential resolution.
Publication: arXiv e-prints
Pub Date: February 2021
arXiv: arXiv:2102.01203
Bibcode: 2021arXiv210201203F
Keywords: Computer Science - Computers and Society