THE "ACCOUNTABILITY GAP" IN AUTONOMOUS SYSTEMS: RECONCILING LEGAL PERSONHOOD WITH ALGORITHMIC OPACITY
Description
As Artificial Intelligence (AI) transitions from narrow task optimization to autonomous decision-making in critical domains such as healthcare, finance, and law, it is outpacing existing regulatory frameworks. This paper addresses the growing "accountability gap," a socio-technical phenomenon in which the opacity of deep learning architectures (the "black-box" problem) complicates traditional legal doctrines of negligence and liability. While current discourse emphasizes bias and privacy, this study focuses on the intersection of algorithmic opacity and legal personhood. By synthesizing current jurisprudence with the technical constraints of eXplainable AI (XAI), the research evaluates whether granting AI "electronic personhood" is a viable solution or an ethical scapegoat for developers. Through a multidisciplinary analysis of case law and algorithmic auditing, we propose a "Hybrid Liability Framework" that balances innovation with human-centric safety. Our findings suggest that closing the accountability gap requires a shift from retroactive litigation to proactive algorithmic governance, ensuring that AI autonomy remains tethered to human institutional responsibility.
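As a purely illustrative sketch (not drawn from the paper itself), the snippet below shows the technical constraint on XAI that the abstract invokes: a standard post-hoc audit, permutation feature importance, applied to an opaque classifier. The model, data, and feature names here are hypothetical stand-ins. The audit yields a global ranking of which inputs matter, but no per-decision rationale of the kind that negligence and liability doctrines presuppose.

```python
# Illustrative sketch only: a post-hoc XAI audit (permutation feature importance)
# of an opaque "black-box" classifier. All data here is synthetic and hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a high-stakes decision task (e.g., a credit decision).
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model: accurate, but its internal logic is hard to inspect.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: measure how much shuffling each feature degrades accuracy.
# This produces a global importance ranking, not a legally traceable justification
# for any individual decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

The gap between this kind of aggregate explanation and a case-specific, contestable rationale is one concrete form of the opacity problem the paper's proposed governance framework is meant to address.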
Files

| Name | Size |
|---|---|
| ZDIF 3133.pdf (md5:e1152e067a965be7ee4b6e8fba2b196b) | 557.9 kB |