Published July 30, 2024 | Version v1
Report | Restricted

Explainability in Expert Contexts

  • University of Edinburgh

Description

The public sector increasingly relies on artificial intelligence (AI) to inform decision making across various domains, including policing, healthcare, social work, and immigration services. AI decision support systems (DSSs) can process large amounts of data (1) and generate outputs, such as predictions of medical diagnoses (2) or potential outcomes of a visa application (3). AI support could make processes within the public sector not only more efficient but also fairer by reducing the potential for human biases (4, 5).

However, AI-driven systems lack contextual sensitivity and cannot account for unique cases. They can also be trained on biased or incomplete data. Given that most of these decisions are highly sensitive, it is crucial that domain experts (e.g. social workers) maintain agency when making AI-supported decisions. Ideally, AI would automate mundane, repetitive tasks and allow experts to focus on higher-level and creative ones (6). Unfortunately, domain experts often cannot understand AI systems well enough to evaluate whether to trust them and their generated outputs (7).

This report provides a broad overview of the challenges faced when DSSs inform decision-making. It explores critical barriers to effective expert–AI collaboration and discusses potential solutions. It also considers the role of explainability in supporting experts and outlines recommendations for how explanations could be made more effective and usable in each expert context.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Additional details

Additional titles

Subtitle
Challenges and Limitations in Supporting Domain Experts in AI-driven Decision-making

Funding

Arts and Humanities Research Council
Bridging Responsible AI Divides (AH/X007146/1)