Conference paper, 2024

On the Alignment of Group Fairness with Attribute Privacy

Abstract

Machine learning (ML) models have been adopted for high-stakes decision-making applications such as healthcare and criminal justice. To ensure trustworthy ML models, recent AI regulations (e.g., the AI Act) have established several pillars, such as privacy, safety, and fairness, that model design must take into account. Designing such models requires an understanding of the interactions between fairness definitions and different notions of privacy. In particular, the interaction of group fairness (i.e., protection against discriminatory behaviour across demographic subgroups) with attribute privacy (i.e., resistance to attribute inference attacks, AIAs) has not been comprehensively studied. In this paper, we study in depth, both theoretically and empirically, the alignment of group fairness with attribute privacy in a blackbox setting. We first propose AdaptAIA, which outperforms existing AIAs on real-world datasets with class imbalance in the sensitive attributes. We then show that group fairness theoretically bounds the success of AdaptAIA, and that this bound depends on the choice of fairness metric (e.g., demographic parity or equalized odds). Through our empirical study, we show that attribute privacy can be achieved from group fairness at no additional cost beyond the already existing trade-off with utility. Our work has several implications: i) group fairness acts as a defense against AIAs, which is currently lacking; ii) practitioners do not need to explicitly train models for both fairness and privacy to meet regulatory requirements; iii) AdaptAIA can be used for blackbox auditing of group fairness.
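
For quick reference, the two group-fairness metrics named in the abstract are commonly formalized as follows (standard definitions in our own notation, not quoted from the paper): for a binary predictor \hat{Y}, sensitive attribute S, and true label Y,

Demographic parity:  \Pr[\hat{Y} = 1 \mid S = 0] = \Pr[\hat{Y} = 1 \mid S = 1]
Equalized odds:      \Pr[\hat{Y} = 1 \mid S = 0, Y = y] = \Pr[\hat{Y} = 1 \mid S = 1, Y = y]  for y \in \{0, 1\}

Intuitively, the closer a model's output distribution is across the groups defined by S, the less that output reveals about S to a blackbox adversary, which is the intuition behind the alignment with attribute privacy stated above.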

Main file

FairPrivateML-35.pdf (675.32 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04740889, version 1 (17-10-2024)

Identifiers

  • HAL Id: hal-04740889, version 1

Cite

Jan Aalmoes, Vasisht Duddu, Antoine Boutet. On the Alignment of Group Fairness with Attribute Privacy. International Web Information Systems Engineering Conference, Dec 2024, Doha, Qatar. ⟨hal-04740889⟩