Bias in data‐driven artificial intelligence systems—An introductory survey
Creators
- Eirini Ntoutsi1
- Pavlos Fafalios2
- Ujwal Gadiraju1
- Vasileios Iosifidis1
- Wolfgang Nejdl1
- Maria‐Esther Vidal1
- Salvatore Ruggieri3
- Franco Turini3
- Symeon Papadopoulos4
- Emmanouil Krasanakis4
- Ioannis Kompatsiaris4
- Katharina Kinder‐Kurlanda5
- Claudia Wagner5
- Fariba Karimi5
- Miriam Fernandez6
- Harith Alani6
- Bettina Berendt7
- Tina Kruegel1
- Christian Heinze1
- Klaus Broelemann8
- Gjergji Kasneci8
- Thanassis Tiropanis9
- Steffen Staab9
- 1. LUH
- 2. FORTH
- 3. UNIPI
- 4. CERTH
- 5. GESIS
- 6. OU
- 7. KUL
- 8. SCHUFA
- 9. SOTON
Description
Artificial Intelligence (AI)‐based systems are widely employed nowadays to make decisions that have far‐reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and at any time, raising concerns about potential human rights violations. It is therefore necessary to move beyond traditional AI algorithms optimized for predictive performance and to embed ethical and legal principles in their design, training, and deployment, so as to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well grounded in a legal frame. In this survey, we focus on data‐driven AI, as a large part of AI is nowadays powered by (big) data and powerful machine learning algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth.
Files
- nobias-introductory-survey-dkmd-2020-preprint.pdf (839.4 kB, md5:458fc98f097f5696acd6ca791b343ec9)