Published December 31, 2025 | Version v1

Empirical Evidence Of Interpretation Drift In Large Language Models

Description

This release contains two companion documents examining interpretation drift in large language models.

The first paper establishes the empirical existence of interpretation drift, demonstrating that identical or near-identical inputs can yield meaningfully different interpretations across models, time, or context, even under deterministic decoding. Its focus is observational and measurement-oriented: determining whether interpretive variance occurs and how it manifests in practice. An artifact providing empirical grounding for the interpretation-drift framework is introduced in Empirical Evidence Of Interpretation Drift In ARC-Style Reasoning [https://zenodo.org/records/18420425].
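
As a rough illustration of the kind of observation the first paper reports, the sketch below re-sends a single prompt under deterministic decoding and flags outputs that diverge from the first response. The query_model callable, the run count, and the 0.95 similarity threshold are illustrative assumptions introduced here, not details taken from the released documents.

    # Minimal sketch, assuming a user-supplied deterministic interface.
    # `query_model`, `runs`, and `threshold` are hypothetical placeholders,
    # not part of the released documents.
    from difflib import SequenceMatcher
    from typing import Callable, List

    def probe_drift(query_model: Callable[[str], str], prompt: str,
                    runs: int = 5, threshold: float = 0.95) -> List[float]:
        """Re-send one prompt and score each output against the first run."""
        baseline = query_model(prompt)           # reference interpretation
        scores: List[float] = []
        for _ in range(runs - 1):
            output = query_model(prompt)         # same prompt, same settings
            ratio = SequenceMatcher(None, baseline, output).ratio()
            scores.append(ratio)
            if ratio < threshold:                # candidate drift, for human review
                print(f"possible drift (similarity {ratio:.2f}): {output[:80]}")
        return scores

Surface-level text similarity is only a coarse proxy for interpretive agreement; checks of this kind surface candidates for human review rather than settle whether drift has occurred.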

The second document is a companion field guide that organizes those observations into a unified, descriptive taxonomy and a growing library of diagnosed cases. It does not attempt to explain or resolve drift; instead, it provides a structured vocabulary and diagnostic framework for recognizing recurring patterns of interpretive instability across domains such as code generation, go-to-market strategy, M&A analysis, and classification tasks.

Together, these documents separate observation from organization. They are intended to support researchers and practitioners in reasoning clearly about interpretive variance in real-world systems, while explicitly reserving authority, judgment, and decision-making for human actors.

Files

NguyenE2025_Companion_to_EmpiricalEvidenceOfInterpretationDrift.pdf

Additional details

Additional titles

Alternative title
Foundational Substrate Hypothesis: A Unified Account of Stochastic Reasoning

Related works

Is supplemented by
Other: 10.5281/zenodo.18420425 (DOI)
