The Confidence Game: Designing Trustworthy Human-AI Collaborations
Description
This presentation builds on Human-AI Governance (HAIG), a novel framework for governing AI systems as they become increasingly agent-like. As AI capabilities advance from rule-based tools to autonomous agents, traditional governance mechanisms designed around human oversight prove inadequate.
The presentation addresses three critical governance paradoxes: the Accountability-Capability Paradox (humans responsible for outcomes they cannot understand), the Recursion Trap (AI systems governing other AI systems), and the Democratic Deficit (AI systems making consequential decisions without democratic input).
HAIG provides a dynamic governance framework with three core components:
- HAIG Continuum: Maps the evolution from human-dominant to AI-dominant systems across four phases
- Relational Dimensions: Tracks shifting Authority, Autonomy, and Accountability relationships
- Trust Thresholds: Identifies critical transition points where trust requirements fundamentally change
Using ChatGPT as a case study, the presentation demonstrates how billions of human-AI interactions currently operate without appropriate oversight, highlighting urgent governance gaps. The framework proposes institutional innovations including AI Audit Courts, Hybrid Oversight Bodies, and Algorithmic Juries to address these challenges.
Presented at Digital Camp 2025, this work contributes to the emerging field of AI governance by providing practical tools for managing human-AI collaboration while preserving democratic legitimacy.
Files
- 2025_07_03_ Camp Digital_PDF_Short_Edited.pdf (2.2 MB, md5:9248af7441d57da125f50f283963680a)
Additional details
Related works
- Is supplement to
- Preprint: 10.5281/zenodo.15744943 (Other)
- Preprint: arXiv:2505.11579 (arXiv)
- Preprint: arXiv:2505.01651 (arXiv)
Dates
- Created: 2025-07-03 (Presentation Slides)