Published December 29, 2022
| Version v1
Poster
Open
Solving The Data Gap Paradox In Machine Learning using PETs
Creators
- 1. University of Luxembourg
- 2. Sant'Anna School of Advanced Studies
Description
Underrepresentation of marginalized persons in datasets leads to biased models, which in turn bias the decisions made about marginalized people. At the same time, marginalized persons suffer disproportionate harm from privacy violations and therefore need protection from data collection. This tension creates a 'data gap paradox'. In this poster, we discuss this paradox and sketch a solution using privacy-enhancing technologies (PETs).
Files
(629.9 kB)
| Name | Size |
|---|---|
| Poster_Toulouse (3).pdf (md5:f903704c6de9cafe285cfb55c172eff2) | 629.9 kB |
Additional details
References
- M. E. Gilman and R. Green. The surveillance gap: The harms of extreme privacy and data marginalization.
- N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. A survey on bias and fairness in machine learning. arXiv preprint.
- S. Sannon and A. Forte. Privacy research with marginalized groups: What we know, what's needed, and what's next. arXiv preprint.
- A. Trask, E. Bluemke, B. Garfinkel, C. G. Cuervas-Mons, and A. Dafoe. Beyond privacy trade-offs with structured transparency. arXiv preprint arXiv:2012.08347, 2020.
- J. Whittlestone, R. Nyrup, A. Alexandrova, and S. Cave. The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 195–200. ACM, 2019.