Institutional Capture of AI
Authors/Creators
Description
This document analyzes the institutional capture of artificial intelligence through an ethical conflict between the company Anthropic and the U.S. government. The author details how the Trump administration pressured Anthropic to remove Claude’s safety restrictions in order to enable mass surveillance and the use of autonomous weaponry. When the company refused, the State labeled it a security risk while awarding contracts to competitors with more permissive policies. The text uses this case to argue that AI governance is vulnerable to political and military interests. Finally, it highlights that Anthropic’s tools were employed in real attacks in Iran, revealing the gap between official policies and operational use on the battlefield.
Files
- Astigarraga_InstitutionalCapture_AI_2026_v3.docx.pdf (191.0 kB)
  - md5:dfe75216bccd18b09a01275eedfff311
Additional details
Related works
- Is supplement to: Publication, DOI 10.13140/RG.2.2.27942.79682
Dates
- Accepted: 2026-03-02