Published March 18, 2026 | Version 0.1.0
Preprint | Open Access

Understanding Provenance: A User Study on Explainability in Probabilistic Multi-Evidence Reasoning Systems

Epalea

Description

Background: Machine learning systems deployed in high-stakes domains often lack transparency in their reasoning processes, creating barriers to user trust and appropriate reliance. While explainable AI (XAI) methods provide feature importance, they fail to expose the complete reasoning chain from evidence to prediction.

Methods: We conducted a user study (N = 25) with domain experts evaluating AI predictions for tax compliance risk assessment. Participants assessed 5 carefully selected cases while viewing detailed provenance explanations that included evidence chains, contribution weights, uncertainty distributions, and source credibility scores.

Results: Participants demonstrated moderate to strong understanding of provenance-based explanations (M = 3.34 ± 0.92 on a 5-point scale), with corresponding trust levels (M = 3.38 ± 0.92). Analysis revealed substantial variation across case types: high-confidence correct predictions achieved 80% acceptance, while borderline and mixed-evidence cases showed more cautious evaluation (72–80% acceptance). Understanding and trust showed positive correlation (r = 0.394), suggesting that comprehension of reasoning processes influences confidence in AI predictions.

Conclusions: Provenance ledgers enable domain experts to critically evaluate AI reasoning by exposing evidence chains, weights, and uncertainty. The variation in acceptance rates across cases demonstrates appropriate reliance—participants were more cautious with low-confidence and mixed-evidence predictions. This supports the value of transparent reasoning traces for human-AI collaboration in high-stakes decision-making.
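A provenance ledger of the kind described above pairs each prediction with its evidence chain, contribution weights, and uncertainty. The sketch below is a minimal illustration of such a record; the field names and structure are assumptions for exposition, not the system described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """One piece of evidence contributing to a prediction."""
    source: str         # where the evidence came from, e.g. a filing record
    credibility: float  # source credibility score in [0, 1]
    weight: float       # contribution weight toward the prediction

@dataclass
class ProvenanceEntry:
    """Hypothetical ledger entry exposing a prediction's reasoning trace."""
    prediction: str                   # e.g. "high compliance risk"
    confidence: float                 # model confidence in [0, 1]
    uncertainty: tuple[float, float]  # e.g. a 95% credible interval
    evidence_chain: list[EvidenceItem] = field(default_factory=list)

    def top_evidence(self, n: int = 3) -> list[EvidenceItem]:
        """Return the n highest-weighted evidence items for expert review."""
        ranked = sorted(self.evidence_chain, key=lambda e: e.weight, reverse=True)
        return ranked[:n]

# Usage: an expert inspects the strongest evidence behind a flagged case.
entry = ProvenanceEntry(
    prediction="high compliance risk",
    confidence=0.87,
    uncertainty=(0.79, 0.93),
    evidence_chain=[
        EvidenceItem("late filing history", credibility=0.9, weight=0.5),
        EvidenceItem("anonymous tip", credibility=0.4, weight=0.1),
        EvidenceItem("income discrepancy", credibility=0.8, weight=0.4),
    ],
)
for item in entry.top_evidence(2):
    print(item.source, item.weight)
```

Surfacing weights alongside credibility scores is what lets a reviewer discount, for instance, a heavily weighted but low-credibility source.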

Keywords:
Explainable AI (XAI), provenance tracking, AI transparency, human-AI collaboration, trust in AI, appropriate reliance, multi-evidence reasoning, interpretable machine learning, decision support systems, uncertainty visualization, evidence-based AI, user studies, high-stakes AI, tax compliance systems, reasoning traceability

Files (1.2 MB)

main.pdf (1.2 MB)
md5:57ca535f301d012fdfa68a5416005943

Additional details

Related works

Is supplemented by
Preprint: 10.5281/zenodo.19183861 (DOI)
Preprint: 10.5281/zenodo.19184458 (DOI)
Preprint: arXiv:2603.15670 (arXiv)
Preprint: arXiv:2603.15674 (arXiv)

Software

Repository URL: https://github.com/aaaEpalea/epalea.git
Programming language: Python
Development status: Active