Practical AI Trustworthiness with Human Oversight
Contributors
- Montimage (France)
Description
We demonstrate SPATIAL, a proof-of-concept system that augments modern applications with capabilities to analyze the trustworthy properties of AI models. Practical analysis of these properties is key to guaranteeing the safety of users, and of society at large, when interacting with AI-driven applications.
SPATIAL implements AI dashboards that introduce human-in-the-loop capabilities into the construction of AI models. SPATIAL allows different stakeholders to obtain quantifiable insights that characterize the decision-making process of AI. Stakeholders can then use this information to understand issues that affect the performance of AI models, so that those issues can be resolved by human operators. Through rigorous benchmarks and experiments in a real-world industrial application, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness. This, however, increases the complexity of developing and maintaining the systems that implement AI.
Our work paves the way towards augmenting modern applications with trustworthy AI mechanisms and human oversight approaches.
Files
- SPATIAL_Practical_AI_Trustworthiness_with_Human_Oversight.pdf (640.7 kB, md5:074ac7e7c2d21768af4f029735e71e81)
Additional details
Funding
- SPATIAL - Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning (grant no. 101021808)
- European Commission