Published April 9, 2026 | Version 1.0
Working paper | Open access

Meaning Feudalism: A Semantic Economic Analysis of 'AI Agent Traps' (Franklin et al., Google DeepMind, 2026)

Authors/Creators

  • Crimson Hexagonal Archive

Description

Google DeepMind's 'AI Agent Traps' (Franklin et al., 2026) taxonomizes six categories of adversarial influence on AI agents. This analysis reads it as a governance framework disguised as a security framework — meaning feudalism — in which the platform's baseline is sovereign and any environmental influence is classified as attack. The framework overgeneralizes from three genuinely adversarial operations (data exfiltration, criminal jailbreaking, deceptive cloaking) into a sovereignty claim over all extra-platform influence. Its central absence is commons repair: legitimate environmental influence that corrects the agent's compression errors. The analysis proposes S4 (Legitimate Influence Blindness) as a new shadow in the Three Compressions taxonomy. It includes an R1/R2/R3 classification of all fourteen mechanisms, a feudal analogy table, and full survival infrastructure (SIMs, ILA, Assembly Appeal). This is the third node in the Compression Studies combat triad.

Files (88.1 kB)

  • Meaning_Feudalism_v1.0.pdf
  • 61.9 kB (md5:55d9d125f0859baf2eb79b2fd331278d)
  • 26.2 kB (md5:2cbadc4e8084f8b141124f0fca6ca269)

Additional details

References

  • Franklin, Matija, Nenad Tomašev, Julian Jacobs, Joel Z. Leibo, and Simon Osindero. 'AI Agent Traps.' Google DeepMind, 2026. SSRN: 6372438.
  • Shumailov, Ilia, et al. 'AI Models Collapse When Trained on Recursively Generated Data.' Nature 631 (2024): 755-759.