
Published February 16, 2026 | Version v1.0

The Long Arc of Trust: A history of belief systems—and the machinery that replaced them

Description

Trust is treated here as a coordination capability: the practical ability of people and institutions to commit resources and act under uncertainty without intolerable exposure to betrayal, error, or opportunism. The argument follows the historical technologies that made trust scalable—oath and witness, then record and archive, then bureaucracy, metrics, and platforms—and shows how each step increases reach while weakening the feedback loops that keep authority answerable to the people it affects.

The central claim is that automation has crossed a threshold from assistance to governance. When computational systems deny, prioritize, rank, gate, route, or allocate at scale, they become synthetic authority—authority that binds without a clearly legible author. In that regime, “correctness” is insufficient. The dominant failure mode is not spectacular error but quiet error: outcomes that are mostly right, wrong in ways that are hard to notice, hard to attribute, and too costly to contest—allowing harm to accumulate without alarms.

To explain why harms differ across contexts, the work introduces two vectors of agency. Infrastructural agents recede into operations and exert power through pipelines; intimate agents enter cognitive space and shape attention, memory, and interpretation. Both are governed by distance—how far authority acts from human context, and how weak the causal and moral return path is from decision to consequence. As distance increases and action becomes faster, the interval in which humans can meaningfully intervene vanishes, producing an architecture of “too late.”

From this diagnosis, the paper defines a legitimacy standard for automated authority: the right of affected parties and supervising institutions to disagree with their own systems in time and at a cost that makes disagreement viable under scale—operationally affordable disagreement. Legitimacy requires building disagreement into the system itself through three hard requirements:

  1. Boundedness: explicit limits on what can be decided, where, and at what stakes, with a scope declaration that is queryable and enforced before execution.

  2. Contestability: a real challenge pathway with deadlines, escalation, and an immutable record of dispute; reasons are not enough if revision is not feasible.

  3. Identifiable responsibility: a legible legal entity and accountable roles tied to decisions, including override authority and remediation obligations, so responsibility has a place to land.
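The boundedness requirement above can be sketched as a scope declaration that is queryable and checked before any decision executes. This is a minimal illustration only; the class, field, and function names are hypothetical and do not come from the paper or the linked repository.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopeDeclaration:
    """Hypothetical explicit scope: what may be decided, where, at what stakes."""
    allowed_actions: frozenset   # e.g. {"rank", "route"}
    allowed_domains: frozenset   # e.g. {"search"}
    max_stakes: float            # highest stakes the system may touch

    def permits(self, action: str, domain: str, stakes: float) -> bool:
        # Queryable check: is this decision inside the declared scope?
        return (action in self.allowed_actions
                and domain in self.allowed_domains
                and stakes <= self.max_stakes)

def execute(scope: ScopeDeclaration, action: str, domain: str,
            stakes: float, decide):
    # Enforcement happens before execution, not as post-hoc review.
    if not scope.permits(action, domain, stakes):
        raise PermissionError(f"out of declared scope: {action}/{domain}")
    return decide()

scope = ScopeDeclaration(frozenset({"rank"}), frozenset({"search"}), 100.0)
result = execute(scope, "rank", "search", 10.0, lambda: "ok")
```

The point of the sketch is the ordering: the scope object exists as data that affected parties and supervisors can query, and the gate runs before the decision does, so out-of-scope actions fail rather than propagate.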

A second pillar is reversibility: the engineered capacity to roll back or remediate after a decision propagates, including restoration of status, access, funds, and—where possible—narrative standing. Audit trails are treated as governance artifacts only when they are written for disagreement (reconstruction, challenge, change) rather than for compliance theater (post-hoc justification).
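An audit record "written for disagreement" can be sketched as an entry that captures enough to reconstruct the decision, challenge it within a deadline, and reverse it. All field names here are illustrative assumptions, not a schema from the paper or repository.

```python
import time

def audit_entry(decision_id: str, inputs: dict, rule_version: str,
                outcome: str, contest_deadline: str, reversal_plan: str) -> dict:
    """Hypothetical audit entry oriented toward reconstruction,
    challenge, and change rather than post-hoc justification."""
    return {
        "decision_id": decision_id,
        "timestamp": time.time(),
        "inputs": inputs,                      # reconstruction: what the system saw
        "rule_version": rule_version,          # reconstruction: which logic ran
        "outcome": outcome,
        "contest_deadline": contest_deadline,  # challenge: dispute is time-bounded
        "reversal_plan": reversal_plan,        # change: how to restore status/access/funds
    }

entry = audit_entry("d-001", {"score": 0.42}, "v1.0", "deny",
                    "2026-03-01", "restore access and refund fees")
```

Each field maps to one leg of the disagreement triad in the text: inputs and rule version support reconstruction, the deadline supports contestation, and the reversal plan makes remediation an engineered property rather than an afterthought.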

The outcome is a practical vocabulary and design spine for building legitimate automated authority across domains. Instead of treating governance as policy layered onto systems after the fact, the work treats governance as a system property: mechanisms that keep institutions able to contest their own automation, preserve optionality, and prevent authority from becoming final simply because it is fast.

Files

The Long Arc of Trust.pdf (533.0 kB)
md5:b5e3069bb866d0cad21bccf82b8785f7

Additional details

Software

Repository URL
https://github.com/AaronVick/Safestop
Development Status
Active