Published February 5, 2026 | Version v1
Journal article, Open Access

Kullback-Leibler divergence as evidence for semantic relativity: Inter-model meaning variance in AI-generated content

  • Independent researcher

Description

Semantic Relativity Theory posits that meaning exhibits observer-dependent properties in AI-mediated communication, analogous to physical relativity. This paper tests a core prediction: different AI models should produce systematically divergent semantic interpretations of identical texts, quantifiable through information-theoretic metrics. We analyzed 89 texts evaluated by three large language models (Grok, Gemini, GPT-4o) using the CHORDS++ multidimensional framework, calculating the Kullback-Leibler divergence between dimensional probability distributions. Results confirm systematic inter-model divergence: KL(Grok||Gemini)=0.001, KL(Grok||GPT)=0.002, KL(Gemini||GPT)=0.002. All pairwise divergences exceeded zero, rejecting the hypothesis of universal meaning. Furthermore, 20-32% of texts exhibited semantic collapse (inter-model gravity G<0.3), indicating profound interpretative incompatibility. These findings support the relativistic premise that meaning lacks absolute existence, instead emerging as an observer-dependent field phenomenon. KL divergence provides a quantifiable metric for semantic relativity, enabling a predictive framework for content stability across heterogeneous AI systems. This work establishes an empirical foundation for relativistic approaches to machine-mediated meaning, with implications for content certification, cross-platform consistency, and AI alignment research.
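The pairwise KL-divergence comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the four-dimensional distributions and model labels below are hypothetical placeholders standing in for the CHORDS++ dimensional probability distributions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P||Q) = sum_i p_i * log(p_i / q_i).

    A small epsilon guards against zero probabilities; note that
    KL divergence is asymmetric, so KL(P||Q) != KL(Q||P) in general.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical per-dimension score distributions for two models,
# each normalized to sum to 1 (illustrative values only).
grok   = [0.24, 0.26, 0.25, 0.25]
gemini = [0.25, 0.25, 0.25, 0.25]

d = kl_divergence(grok, gemini)  # strictly positive whenever p != q
```

A divergence of exactly zero would indicate identical interpretation; the paper's finding that all pairwise divergences exceed zero corresponds to `d > 0` for every model pair.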

Files

DOI_KL_Divergence_EN.pdf (244.8 kB)

Additional details

Related works

Is derived from
Preprint: 10.5281/ZENODO.18215792 (DOI)
Preprint: 10.5281/ZENODO.17873480 (DOI)
Preprint: 10.5281/ZENODO.17611607 (DOI)
References
Other: 10.5281/ZENODO.18078430 (DOI)
Other: 10.5281/ZENODO.18079116 (DOI)