Published February 28, 2026 | Version v1

EXPLAINABILITY OVER ACCURACY: A HUMAN-CENTERED STUDY OF TRUST IN ARTIFICIAL INTELLIGENCE

Authors/Creators

Kaushal Karthikeyan Nadar

Description

As artificial intelligence becomes part of everyday decision-making, trust in these systems is no longer optional; it is essential. While most AI research focuses on improving accuracy, people often interact with systems that offer little or no explanation for their decisions. This study explores a simple but important question: do people place more trust in AI systems that explain their decisions than in systems that are highly accurate but opaque?

To examine this, we compare two simulated AI models. One model delivers highly accurate decisions without explanation, while the other provides clear, understandable explanations with slightly lower accuracy. Participants are presented with AI-generated decisions in a controlled scenario and are asked to evaluate their level of trust, perceived fairness, confidence, and willingness to rely on each system.
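To make this design concrete, below is a minimal sketch of how trust ratings from such a two-condition study might be compared. Everything in it is an illustrative assumption rather than the study's actual data or code: the ratings are invented placeholder values on a 1-7 scale, and Welch's t-test with a Cohen's d effect size is just one reasonable analysis choice.

import numpy as np
from scipy import stats

# Hypothetical 1-7 trust ratings, one per participant per condition.
# These values are invented placeholders, not data from the study.
opaque_ratings = np.array([4, 3, 5, 4, 3, 4, 2, 5, 3, 4])     # accurate, no explanation
explained_ratings = np.array([5, 6, 5, 4, 6, 5, 6, 4, 5, 6])  # explained, slightly less accurate

# Welch's t-test: does mean trust differ between the two conditions?
t_stat, p_value = stats.ttest_ind(explained_ratings, opaque_ratings, equal_var=False)

# Cohen's d (pooled standard deviation) as a simple effect-size estimate.
pooled_sd = np.sqrt((opaque_ratings.var(ddof=1) + explained_ratings.var(ddof=1)) / 2)
cohens_d = (explained_ratings.mean() - opaque_ratings.mean()) / pooled_sd

print(f"mean trust (opaque):    {opaque_ratings.mean():.2f}")
print(f"mean trust (explained): {explained_ratings.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")

In practice, a study like this would likely use a larger sample and possibly a within-subjects design, but the comparison logic stays the same.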

The results indicate that transparency plays a significant role in shaping user trust. Participants generally show a stronger preference for AI systems that offer explanations, even when they are informed that these systems may be marginally less accurate. Explanations help users feel more confident, involved, and assured that decisions are being made fairly.

These findings suggest that accuracy alone is not sufficient for building trustworthy AI. Instead, explainability should be treated as a core design principle, especially in applications where human judgment, accountability, and ethical concerns are critical.

Files

18.Kaushal Karthikeyan Nadar.pdf (465.1 kB)
md5:2c1d54e0ecb305dec7a463fce3356ae8