The use of machine learning systems to process job applications has made hiring faster and more efficient, but it has also raised problems of equality, reliability, and transparency. In this paper we survey some uses of ML in job selection processes in the United States and present some of the racial and gender biases that have been detected in them. Both practical and legal obstacles impede the detection and analysis of these biases, and it remains unclear how algorithmic discrimination should be approached from a legal point of view. The American doctrine of disparate impact offers one possible analytical tool, but we show its limitations and the problems that arise when it is adapted to other legal systems, such as Colombian law. To conclude, we offer some desiderata that any legal analysis of algorithmic discrimination should satisfy.