
Published July 23, 2024 | Version v1
Conference paper | Open Access

Practical AI Trustworthiness with Human Oversight

Description

We demonstrate SPATIAL, a proof-of-concept system that augments modern applications with capabilities to analyze trustworthy properties of AI models. The practical analysis of trustworthy properties is key to guaranteeing the safety of users and of society at large when interacting with AI-driven applications.

SPATIAL implements AI dashboards that introduce human-in-the-loop capabilities into the construction of AI models. SPATIAL allows different stakeholders to obtain quantifiable insights that characterize the decision-making process of AI. Stakeholders can then use this information to understand issues that affect the performance of AI models, so that human operators can resolve them. Through rigorous benchmarks and experiments in a real-world industrial application, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness. This, however, increases the complexity of developing and maintaining the systems that implement AI.
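To make the idea of "quantifiable insights" concrete, the following is a minimal Python sketch, not SPATIAL's actual implementation, of how an application could compute a few trustworthiness indicators for a model and flag issues for a human operator to review. The names TrustReport, trust_report, and the threshold values are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (illustrative, not the SPATIAL implementation): compute a few
# quantifiable trustworthiness indicators for a trained model and surface
# human-readable flags that an operator dashboard could display.
from dataclasses import dataclass

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


@dataclass
class TrustReport:
    accuracy: float          # predictive performance on held-out data
    mean_confidence: float   # how certain the model claims to be, on average
    top_features: list       # simple global explanation via feature importances

    def flags(self, min_accuracy: float = 0.8, max_confidence_gap: float = 0.15):
        """Return human-readable issues for an operator to review (thresholds are assumptions)."""
        issues = []
        if self.accuracy < min_accuracy:
            issues.append(f"Accuracy {self.accuracy:.2f} is below the {min_accuracy} threshold")
        gap = self.mean_confidence - self.accuracy
        if gap > max_confidence_gap:
            issues.append(f"Model appears overconfident by {gap:.2f}")
        return issues


def trust_report(model, X_test, y_test, feature_names, k: int = 3) -> TrustReport:
    """Summarize performance, confidence, and a simple explanation of the model."""
    preds = model.predict(X_test)
    confidence = model.predict_proba(X_test).max(axis=1)
    ranked = np.argsort(model.feature_importances_)[::-1][:k]
    return TrustReport(
        accuracy=accuracy_score(y_test, preds),
        mean_confidence=float(confidence.mean()),
        top_features=[feature_names[i] for i in ranked],
    )


if __name__ == "__main__":
    # Synthetic data stands in for an application's real workload.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    names = [f"f{i}" for i in range(X.shape[1])]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    report = trust_report(model, X_te, y_te, names)
    print(report)
    print("Operator flags:", report.flags() or "none")
```

In a dashboard-style deployment such indicators would be recomputed periodically and shown to stakeholders, with the flagged issues handed to human operators for resolution.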


Our work paves the way towards augmenting modern applications with trustworthy AI mechanisms and human oversight approaches.

Files (640.7 kB)

SPATIAL_Practical_AI_Trustworthiness_with_Human_Oversight.pdf

Additional details

Funding

SPATIAL - Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning (Grant No. 101021808)
European Commission