Bounding and Approximating Intersectional Fairness through Marginal Fairness - Archive ouverte HAL
Conference Papers, 2022

Bounding and Approximating Intersectional Fairness through Marginal Fairness

Mathieu Molina (1, 2), Patrick Loiseau (1)


Discrimination in machine learning often arises along multiple dimensions (a.k.a. protected attributes); it is then desirable to ensure intersectional fairness, i.e., that no subgroup is discriminated against. It is known that ensuring marginal fairness for every dimension independently is not sufficient in general. Due to the exponential number of subgroups, however, directly measuring intersectional fairness from data is impossible. In this paper, our primary goal is to understand in detail the relationship between marginal and intersectional fairness through statistical analysis. We first identify a set of sufficient conditions under which an exact relationship can be obtained. Then, we prove high-probability bounds on intersectional fairness in the general case, easily computable from marginal fairness and other meaningful statistical quantities. Beyond their descriptive value, we show that these theoretical bounds can be leveraged to derive a heuristic that improves the approximation and bounds of intersectional fairness by choosing, in a relevant manner, the protected attributes for which we describe intersectional subgroups. Finally, we test the performance of our approximations and bounds on real and synthetic datasets.
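The abstract notes that marginal fairness along each dimension does not imply intersectional fairness. The following toy sketch (not the paper's method; all rates and subgroup weights are made up for illustration) uses an XOR-style construction over two binary protected attributes: both marginal demographic-parity gaps are zero, yet the worst-case intersectional gap is large.

```python
# Toy illustration that marginal fairness does not imply intersectional
# fairness. Hypothetical positive-prediction rates P(Yhat=1 | A=a, B=b),
# with equal-sized subgroups; the XOR pattern hides subgroup disparity
# from every single-attribute (marginal) view.

rate = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.9}
weight = {g: 0.25 for g in rate}  # equal subgroup proportions (assumed)

def marginal_rate(attr_index, value):
    """P(Yhat=1 | attribute = value), marginalizing out the other attribute."""
    groups = [g for g in rate if g[attr_index] == value]
    w = sum(weight[g] for g in groups)
    return sum(weight[g] * rate[g] for g in groups) / w

# Marginal demographic-parity gaps, one per protected attribute.
gap_a = abs(marginal_rate(0, 0) - marginal_rate(0, 1))
gap_b = abs(marginal_rate(1, 0) - marginal_rate(1, 1))

# Intersectional gap: worst-case rate difference across all subgroups.
gap_inter = max(rate.values()) - min(rate.values())

print(gap_a, gap_b, gap_inter)  # marginal gaps are 0, intersectional gap is 0.8
```

Both marginals average the 0.9 and 0.1 subgroups to the same 0.5 rate, so each one-dimensional audit passes while the subgroup (0, 0) vs. (0, 1) comparison reveals a 0.8 disparity; this is the general-case gap the paper's bounds aim to control.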
Main file: MolinaLoiseau_IntersectionalFairness_NeurIPS2022.pdf (513.71 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03827777, version 1 (24-10-2022)


Mathieu Molina, Patrick Loiseau. Bounding and Approximating Intersectional Fairness through Marginal Fairness. NeurIPS 2022 - 36th Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. pp.1-32. ⟨hal-03827777⟩

