Published July 30, 2024 | Version v4
Report (Open Access)

Automation Bias and Procedural Fairness

  • University of Edinburgh

Description

The use of advanced artificial intelligence (AI) and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which in turn has underappreciated legal consequences. Automation bias is a tendency whereby the operator of an autonomous system defers to its outputs to the point of overlooking or ignoring evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either fetter their discretion to in-house algorithms or improperly delegate their discretion to third-party software developers – something automation bias may facilitate. A synthesis of previous research suggests that the risks of automation bias, and its potential legal ramifications, can be mitigated if those responsible for procurement decisions adhere to a simple checklist designed to avoid the pitfalls of automation as far as possible.

Files

Zerilli et al_Automation Bias and Procedural Fairness.pdf (1.1 MB)

Additional details

Additional titles

Subtitle (English)
A short guide for the public sector

Funding

Arts and Humanities Research Council
Bridging Responsible AI Divides (AH/X007146/1)