We study a fair machine learning (ML) setting in which an 'upstream' model developer is tasked with producing a fair ML model that will be used by several similar but distinct 'downstream' users. This setting introduces new challenges that many existing fairness interventions do not address, echoing existing critiques that current methods are not broadly applicable across the diverse needs of real-world fair ML use cases. We address the upstream/downstream setting by adopting a distributional view of fair classification. Specifically, we introduce a new fairness definition, distributional parity, which measures disparities in the distribution of outcomes across protected groups, and we present a post-processing method that minimizes this measure using techniques from optimal transport. We show that our method creates fairer outcomes for all downstream users, across a variety of fairness definitions, and operates at inference time on unlabeled data. We verify this claim experimentally by comparing against several related methods on four benchmark tasks. Ultimately, we argue that fairer classification outcomes can be produced through the development of setting-specific interventions.
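To make the abstract's core idea concrete, the sketch below illustrates one standard way optimal transport can post-process classifier scores: each group's score distribution is pushed toward the Wasserstein barycenter of all group distributions via quantile matching, shrinking the distributional gap without using labels. This is a minimal illustration under our own assumptions, not the paper's actual algorithm; the functions `wasserstein_1d` and `repair_scores` are hypothetical names.

```python
# Illustrative sketch (not the paper's implementation) of 1-D optimal-transport
# post-processing: scores from each protected group are mapped to the
# Wasserstein barycenter of the per-group score distributions.
import numpy as np

QS = np.linspace(0.0, 1.0, 101)  # quantile grid used throughout

def wasserstein_1d(a, b):
    """Approximate W1 distance between two empirical 1-D distributions,
    computed as the mean absolute gap between their quantile functions."""
    return np.mean(np.abs(np.quantile(a, QS) - np.quantile(b, QS)))

def repair_scores(scores, groups):
    """Transport each group's scores to the quantile-averaged barycenter."""
    uniq = np.unique(groups)
    # Barycenter quantile function: average of the per-group quantiles.
    bary_q = np.mean(
        [np.quantile(scores[groups == g], QS) for g in uniq], axis=0
    )
    repaired = np.empty_like(scores, dtype=float)
    for g in uniq:
        s = scores[groups == g]
        # Empirical CDF rank of each score within its own group ...
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # ... pushed through the barycenter's quantile function.
        repaired[groups == g] = np.interp(ranks, QS, bary_q)
    return repaired

# Usage: two groups with shifted score distributions.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.array([0] * 500 + [1] * 500)
print("W1 before:", wasserstein_1d(scores[groups == 0], scores[groups == 1]))
fixed = repair_scores(scores, groups)
print("W1 after: ", wasserstein_1d(fixed[groups == 0], fixed[groups == 1]))
```

Because the repair step needs only scores and group membership, not ground-truth labels, it can run at inference time on unlabeled data, which is the property the abstract highlights for the downstream users.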