Published July 30, 2024 | Version v3
Report | Open Access

Explainability in Expert Contexts

  • University of Edinburgh

Description

The public sector increasingly relies on artificial intelligence (AI) to inform decision-making across various domains, including policing, healthcare, social work, and immigration services. AI decision support systems (DSSs) can process large amounts of data (1) and generate outputs such as predictions of medical diagnoses (2) or potential outcomes of a visa application (3). AI support could make processes within the public sector not only more efficient but also fairer by reducing the potential for human bias (4, 5).

However, AI-driven systems lack contextual sensitivity and cannot account for unique cases. They can also be trained on biased or incomplete data. Given that most of these decisions are highly sensitive, it is crucial that domain experts (e.g. social workers) maintain agency when making AI-supported decisions. Ideally, AI would automate mundane, repetitive tasks and allow experts to focus on higher-level, creative ones (6). Unfortunately, domain experts often cannot understand AI systems well enough to evaluate whether they should trust them and their generated outputs (7).

This report provides a broad overview of the challenges that arise when DSSs inform decision-making. It explores critical obstacles to effective expert–AI collaboration and discusses potential solutions. It also considers the role of explainability in supporting experts and outlines recommendations for how explanations could be made more effective and usable in expert contexts.

Files

Simkute_Explainability in Expert Contexts.pdf (834.3 kB)
md5:0e50769db7b5d68cb07ff364c5773645

Additional details

Subtitle
Challenges and Limitations in Supporting Domain Experts in AI-driven Decision-making

Funding

Bridging Responsible AI Divides (AH/X007146/1)
Arts and Humanities Research Council