Seeing Like a System: High Modernism, Legibility, and the Limits of Intelligent Robotics
Description
Working Thesis
Contemporary AI and intelligent robotics risk reproducing the epistemic structure of high modernism by privileging legible, optimisable representations over situated, context-sensitive intelligence.
Although advances in embodied and adaptive systems suggest a technical pathway beyond abstraction, institutional requirements of auditability, predictability, and scale systematically favour simplified operational models.
As a result, genuinely context-responsive intelligence – whether human or artificial – stands in structural tension with the governance conditions of late-modern technological systems.
Abstract
This working paper extends theories of administrative legibility and high-modernist rationality into the domain of contemporary artificial intelligence, intelligent robotics, and algorithmic management. Building on the analysis of state simplification in Seeing Like a State, the study argues that late-modern technological systems inherit and intensify the epistemic commitments of high modernism by prioritising representations that are measurable, auditable, and scalable over forms of knowledge grounded in situated practice.
While strands of robotics and embodied AI research demonstrate the feasibility of context-sensitive, adaptive intelligence, the institutional environments in which such systems are deployed impose constraints that favour predictability and formal legibility. This produces a structural paradox: the technical conditions for more genuinely intelligent systems increasingly exist, yet the governance frameworks surrounding their deployment incentivise epistemic simplification analogous to earlier high-modernist planning regimes.
The paper situates this tension within management theory, science and technology studies, and philosophy of technology, suggesting that intelligent systems may ultimately confront the same limits of abstraction previously observed in state administration and organisational planning. Rather than treating AI failure primarily as a technical deficit, the analysis reframes it as a consequence of legibility requirements embedded in large-scale coordination.
This document serves as a conceptual foundation for a longer investigation into legibility, affect, and intelligent governance in late modernity.
Files
Seeing Like a System.pdf (161.2 kB)
md5:af81efa2962b271820ed0a182e687751
Additional details
Related works
- References
- Working paper: 10.5281/zenodo.18501335 (DOI)
Dates
- Created: 2025-02-05 (Initial public release of conceptual working paper)