Published March 8, 2026 | Version v1
Preprint | Open Access

Pragmatics: Delivering Expert Judgment to AI Systems


Description

Large language models arrive well-informed about federal statistics but cannot reliably assess the fitness for use of the data they retrieve. This paper introduces pragmatics, structured expert judgment delivered at the point of statistical reasoning, and provides empirical evidence that the approach works. In a knowledge representation study, an LLM was tested across 39 Census data queries under three conditions: no methodology support (control), standard retrieval-augmented generation (RAG) using 311 document chunks, and pragmatics using 36 curated items. Both treatments drew on the same 354 pages of source documentation; only the method of representation differed.

Pragmatics produced very large improvements in consultation quality relative to control (Cohen's d = 1.440) and RAG (d = 0.922), with the strongest effects on uncertainty communication (d = 1.353). Pipeline fidelity reached 91.2%, up from 74.6% for RAG. All 39 queries received identical methodology context through deterministic lookup rather than similarity ranking. Pragmatics delivered 2.2 times the quality improvement per dollar spent compared to RAG. The architecture is domain-agnostic; the content is domain-specific. The concept, architecture, and evaluation framework generalize to any specialized domain where AI systems require expert judgment at the point of decision.
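The contrast between deterministic lookup and similarity ranking can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the topic keys, item texts, and the word-overlap scorer standing in for embedding similarity are all invented for exposition.

```python
# Illustrative contrast between the two retrieval strategies described above.
# All names and content are hypothetical, not taken from the paper.

PRAGMATICS = {
    # Curated expert-judgment items keyed by query topic (illustrative).
    "acs_median_income": [
        "ACS 1-year estimates are unsuitable for areas under 65,000 people.",
        "Report the margin of error alongside every point estimate.",
    ],
}

def pragmatics_context(topic: str) -> list[str]:
    """Deterministic lookup: every query on a topic gets identical context."""
    return PRAGMATICS.get(topic, [])

def rag_context(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Similarity ranking: returned context varies with the query's wording.
    Word overlap stands in here for embedding similarity."""
    def overlap(chunk: str) -> int:
        return len(set(query.lower().split()) & set(chunk.lower().split()))
    return sorted(chunks, key=overlap, reverse=True)[:k]
```

The design difference is the point: the lookup path guarantees that all queries on a topic receive the same curated items, while the ranked path returns whichever chunks happen to score highest for a given phrasing.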

Files (11.2 MB)

Webb_2026_Pragmatics_Delivering_Expert_Judgment_to_AI_Systems.pdf