Published March 6, 2026 | Version 1.0
Preprint | Open Access

Boundary as an Execution-Time Primitive for AI-Assisted Software Development Governance

  • Independent Researcher

Description

Loss of control, behavioral drift, and non-auditability in AI-assisted software development are commonly attributed to model misalignment, hallucination, or insufficient guardrails. This paper argues that such diagnoses overlook a fundamental category distinction.

We distinguish between model alignment boundaries, established at training time through data distributions, RLHF, and safety fine-tuning, and task execution boundaries, which must be explicitly constructed at execution time for a specific engineering task. While the former provides general, statistical safety tendencies, it does not—and cannot—automatically inherit the concrete, task-specific constraints required for engineering governance.

We show that many widely reported failures, including insecure yet functional code generation, arise not from deficient model alignment but from the absence of a decidable task execution boundary at runtime. When such a boundary is missing, drift and violation become epistemically undecidable, and model preferences fill the resulting vacuum.

We formalize task execution boundaries as the resolution of visible scope and explicit prohibitive constraints, introduce boundary evidence as the minimal auditable unit, and demonstrate through engineering scenarios that governance mechanisms operating without this primitive rest on interpretive rather than decidable foundations.
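The construct above can be sketched concretely. The following is a minimal illustration, not the paper's formalism; all names (`Boundary`, `Evidence`, `check`) are hypothetical. It models a task execution boundary as the resolution of a visible scope and a set of explicit prohibitions, where every proposed action resolves to a decidable verdict and emits a boundary-evidence record as the auditable unit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Boundary:
    """Hypothetical task execution boundary: visible scope + prohibitions."""
    visible_scope: frozenset   # e.g. files the task may touch
    prohibitions: frozenset    # actions explicitly forbidden for this task

@dataclass(frozen=True)
class Evidence:
    """Boundary evidence: the minimal auditable unit for one checked action."""
    action: str
    target: str
    verdict: str               # "allowed" or "violation"
    reason: str

def check(boundary: Boundary, action: str, target: str) -> Evidence:
    """Decidable membership test: every (action, target) yields a verdict."""
    if action in boundary.prohibitions:
        return Evidence(action, target, "violation", "explicitly prohibited action")
    if target not in boundary.visible_scope:
        return Evidence(action, target, "violation", "target outside visible scope")
    return Evidence(action, target, "allowed", "inside scope, not prohibited")

b = Boundary(visible_scope=frozenset({"src/app.py"}),
             prohibitions=frozenset({"delete"}))
log = [check(b, "edit", "src/app.py"),       # inside scope -> allowed
       check(b, "edit", "infra/deploy.sh"),  # outside scope -> violation
       check(b, "delete", "src/app.py")]     # prohibited -> violation
```

Without such a boundary object at runtime, the same three actions have no decidable status: whether editing `infra/deploy.sh` constitutes drift is a matter of interpretation, which is the vacuum the paper argues model preferences then fill.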

Files (228.5 kB)

2026-02_boundary-as-an-execution-time-primitive.pdf

Additional details

Related works

Is supplement to
Preprint: 10.31224/6583 (DOI)