Published March 1, 2026 | Version v1
Lesson | Open Access

AI AND ALGORITHMIC BIAS

  • Universitat de Barcelona

Description

This Open Educational Resource (OER) examines AI bias and algorithmic bias as forms of systematic discrimination embedded in artificial intelligence systems and machine learning models. It explains how biased training data, model design choices, feedback loops, and socio-technical factors can produce unfair or discriminatory outcomes, reinforcing existing gender, racial, and socioeconomic inequalities.
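The feedback-loop mechanism mentioned above can be made concrete with a toy simulation (not taken from the OER; all names and numbers are illustrative). Two areas have identical true incident rates, but one starts with slightly more historical records. If an algorithm allocates attention in proportion to recorded incidents, and new records in turn scale with that attention, the historical gap is perpetuated indefinitely: the data keep confirming the allocation that produced them.

```python
# Toy sketch of a self-confirming feedback loop (illustrative values only).
# Areas A and B have the same true incident rate, but A begins with more
# historical records, so it receives more patrols and generates more records.

def simulate_feedback_loop(records_a=55.0, records_b=45.0,
                           true_rate=0.5, rounds=10, total_patrols=100):
    """Return area A's share of recorded incidents after each round."""
    shares = []
    for _ in range(rounds):
        total = records_a + records_b
        # Allocation mirrors the recorded history, not the true rates.
        patrols_a = total_patrols * records_a / total
        patrols_b = total_patrols - patrols_a
        # New records scale with patrol presence; true rates are equal.
        records_a += patrols_a * true_rate
        records_b += patrols_b * true_rate
        shares.append(records_a / (records_a + records_b))
    return shares

shares = simulate_feedback_loop()
print(f"share of A after 10 rounds: {shares[-1]:.2f}")  # stays at 0.55
```

Even though both areas are identical in reality, area A's 55% share of the records never decays toward parity: the loop has no mechanism to correct its own prior, which is the sense in which such systems entrench rather than measure existing disparities.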

The resource presents concrete examples of AI and algorithmic bias, including racial bias in facial recognition systems, gender bias in recruitment algorithms, bias in AI-assisted medical diagnosis, and discriminatory outcomes in predictive policing tools. These cases illustrate how AI systems can reflect and amplify structural inequalities rather than operate as neutral technologies.

The OER also proposes strategies to mitigate or prevent bias, such as using diverse and representative datasets, conducting regular audits, ensuring transparency and explainability (XAI), adopting inclusive design practices, and establishing regulatory frameworks. Particular attention is given to the role of Library and Information Science (LIS) professionals in fostering AI literacy, algorithmic literacy, and data literacy, as well as in advocating for fair, transparent, and accountable AI systems.
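One of the audit practices mentioned above can be sketched in a few lines. The example below (not from the OER; group data and function names are hypothetical) computes demographic parity as a disparate impact ratio, the comparison behind the "four-fifths rule" used in US employment contexts, where a ratio below 0.8 flags potential adverse impact.

```python
# Minimal fairness-audit sketch: compare favourable-outcome rates
# across two groups. All decision data below are hypothetical.

def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical screening decisions for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("flag: ratio below the four-fifths threshold")
```

Regular audits of this kind are deliberately simple: a single summary ratio cannot prove a system fair, but it gives LIS professionals and other non-developers a concrete, reproducible number to demand from vendors and to monitor over time.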

Developed within the framework of the European project GEDIS (Gender Diversity in Information Science), this OER promotes critical awareness of AI systems and supports inclusive and socially responsible technological development in higher education.

Files (754.7 kB)

  • Milijana-OER-JAN-ENG-2026.pdf, 640.4 kB, md5:d6322066bd244dc63dfd35354a80277b
  • 114.2 kB, md5:06b78d1e1ee81595b15415e750dad641

Additional details

Related works

Is derived from
Lesson: 10.5281/zenodo.18074950 (DOI)
