Learning from Discriminatory Training Data
Abstract
Supervised learning systems are trained on historical data and, if that data was tainted by discrimination, they may unintentionally learn to discriminate against protected groups. We propose that fair learning methods, despite training on potentially discriminatory datasets, should perform well on fair test datasets. Such dataset shifts crystallize application scenarios for specific fair learning methods. For instance, the removal of direct discrimination can be represented as a particular dataset shift problem. For this scenario, we propose a learning method that provably minimizes model error on fair datasets while blindly training on datasets poisoned with direct additive discrimination. The method is compatible with existing legal systems and addresses the widely discussed issue of the intersectionality of protected groups by striking a balance between them. Technically, the method applies probabilistic interventions, has causal and counterfactual formulations, and is computationally lightweight: it can be used with any supervised learning model to prevent discrimination via proxies while maximizing model accuracy for business necessity.
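A minimal sketch of the dataset-shift scenario the abstract describes, assuming a toy regression setting: training labels are poisoned with an additive offset against one protected group, while the fair test labels are not. This is not the paper's algorithm; the group-offset correction below is a hypothetical placeholder heuristic, and all variable names are illustrative.

```python
# Toy illustration of "direct additive discrimination" as a dataset shift.
# NOT the method proposed in the paper; the correction step is a simplistic stand-in.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

# Legitimate features and a binary protected attribute (independent of the features here).
x = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)

# "Fair" outcome depends only on the legitimate features.
coef = np.array([1.0, -2.0, 0.5])
y_fair = x @ coef + rng.normal(scale=0.3, size=n)

# Training labels are poisoned with an additive penalty against group a=1.
bias = 1.5
y_train = y_fair - bias * a

# Naive model: trained blindly on the poisoned labels.
naive = LinearRegression().fit(np.c_[x, a], y_train)

# Crude correction for this toy setting: estimate the group offset from the
# poisoned labels and remove it before fitting (a placeholder heuristic,
# valid here only because the features are independent of the group).
offset_hat = y_train[a == 0].mean() - y_train[a == 1].mean()
corrected = LinearRegression().fit(np.c_[x, a], y_train + offset_hat * a)

# Fair test set: same feature distribution, labels without the discriminatory shift.
x_te = rng.normal(size=(n, 3))
a_te = rng.integers(0, 2, size=n)
y_te = x_te @ coef + rng.normal(scale=0.3, size=n)

for name, model in [("naive", naive), ("corrected", corrected)]:
    mse = np.mean((model.predict(np.c_[x_te, a_te]) - y_te) ** 2)
    print(f"{name}: fair-test MSE = {mse:.3f}")
```

In this toy run the naive model inherits the discriminatory offset and incurs a higher error on the fair test set, while the corrected model recovers roughly the noise-level error, which is the kind of "train on poisoned data, evaluate on fair data" gap the abstract formalizes.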
- Publication: arXiv e-prints
- Pub Date: December 2019
- DOI: 10.48550/arXiv.1912.08189
- arXiv: arXiv:1912.08189
- Bibcode: 2019arXiv191208189G
- Keywords: Computer Science - Machine Learning; Computer Science - Computers and Society; Physics - Physics and Society; I.2.6; K.4.1
- E-Print: 16 pages, 14 figures, 1 table