Published March 7, 2026 | Version 1.0
Preprint · Open Access

Control Without Code: Linguistic Governance in Long-Horizon Human–AI Collaboration

Independent Researcher

Description

This paper examines how conversational structure can function as a governance mechanism in sustained human–AI collaboration. While most work on AI reliability focuses on model architecture, alignment techniques, or prompt design, this study investigates a different layer: the linguistic control structures that emerge during long-horizon interaction.

Using a longitudinal case study of a governed human–AI collaboration, the paper analyses a 25-event observational corpus and identifies three categories of linguistic control primitives: scope drift signals, repair protocols, and behavioural anchors. A measurement framework is used to examine failure timing, repair latency, and continuity preservation within a locked analytical subset of the dataset.
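To make the measurement framework concrete, the sketch below shows one way repair latency could be computed from a timestamped event corpus. The event schema, field names, and event labels ("drift_signal", "repair_done") are illustrative assumptions, not the paper's actual coding scheme.

```python
from dataclasses import dataclass

# Hypothetical event record; the fields and labels are assumptions
# for illustration, not the paper's corpus schema.
@dataclass
class Event:
    t: float    # timestamp, e.g. minutes into the collaboration
    kind: str   # "drift_signal", "repair_done", ...

def repair_latency(events):
    """Time from each drift signal to the next completed repair."""
    latencies = []
    pending = None  # timestamp of the unresolved drift signal, if any
    for e in sorted(events, key=lambda e: e.t):
        if e.kind == "drift_signal" and pending is None:
            pending = e.t
        elif e.kind == "repair_done" and pending is not None:
            latencies.append(e.t - pending)
            pending = None
    return latencies

# Toy four-event corpus: two drift episodes, each followed by a repair.
corpus = [Event(0.0, "drift_signal"), Event(3.5, "repair_done"),
          Event(10.0, "drift_signal"), Event(12.0, "repair_done")]
print(repair_latency(corpus))  # [3.5, 2.0]
```

Failure timing and continuity preservation could be measured analogously by scanning the same ordered event stream for their respective marker events.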

The findings suggest that linguistic drift can act as an early warning signal of instability, that explicit repair language can support relatively rapid recovery, and that behavioural anchors can help preserve epistemic alignment and collaboration continuity. Taken together, these observations support the concept of linguistic governance: the idea that language itself can function as a micro-governance interface within human–AI systems.

The paper contributes to research on AI governance, human–AI interaction, and long-horizon collaboration by proposing that stability is shaped not only by model capability, but also by the design of conversational control mechanisms embedded within dialogue.

Files

Paper5_Control_Without_Code_v1.1_FINAL.pdf (394.2 kB)
md5:b99c7e67c501f95a091262488715aad0