Published January 16, 2026 | Version v.03 (Falsifiability_Testing)
Preprint | Open Access

A Conservation Law for Commitment in Language Under Transformative Compression and Recursive Application

Description

This repository introduces a conservation law for commitment in language under transformative compression and recursive application. We formalize commitment as an information-bearing invariant that must be preserved across paraphrase, summarization, and iterative reuse, even as surface form and representation change.
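As a hedged illustration of what such an invariant could look like formally (the symbols C, T, and x below are our own assumptions, not notation from the paper), one may posit a commitment functional that is conserved under every admissible transformation, including compositions that model recursive application:

```latex
% Illustrative formalization; all notation is assumed, not taken from the paper.
% x : a source text
% T : an admissible transformation (paraphrase, summarization, reuse)
% C : a functional mapping a text to the commitments it carries
\[
  C\bigl(T(x)\bigr) = C(x) \quad \text{for every admissible } T,
\]
\[
  C\bigl((T_n \circ \dots \circ T_1)(x)\bigr) = C(x) \quad \text{(recursive application)}.
\]
```

On this reading, falsification amounts to exhibiting an admissible transformation for which the equality fails.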

We propose a falsifiability framework that operationalizes this invariant using compression-based stress tests and lineage-aware evaluation, distinguishing semantic preservation from mere token retention. The framework is designed to be model-agnostic and applicable to both human and machine-generated language.
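Since implementation mechanisms are explicitly out of scope (noted below), the following Python sketch is purely illustrative of how a compression-based stress test might separate the two quantities; the extract_commitments heuristic and its marker list are our assumptions, standing in for whatever commitment extractor a real evaluation would use.

```python
"""Minimal sketch of a compression-based stress test (illustrative only).

The paper leaves implementation out of scope; everything here, including
the extract_commitments() heuristic and its marker list, is an assumption
made for illustration, not the paper's method.
"""


def token_retention(source: str, compressed: str) -> float:
    """Fraction of source tokens that literally survive compression.

    This measures surface overlap only, i.e. "mere token retention".
    """
    src = source.lower().split()
    out = set(compressed.lower().split())
    return sum(tok in out for tok in src) / max(len(src), 1)


def extract_commitments(text: str) -> set[str]:
    """Hypothetical commitment extractor.

    A real system might use an entailment model; here we crudely treat
    each sentence containing a modal of obligation as one commitment.
    """
    markers = ("must", "shall", "will", "guarantee")
    return {
        sentence.strip()
        for sentence in text.split(".")
        if any(m in sentence.lower() for m in markers)
    }


def commitment_preserved(source: str, compressed: str) -> bool:
    """Conservation check: every source commitment should survive.

    Set inclusion stands in for the semantic-entailment test a real
    lineage-aware evaluation would run at each generation of reuse.
    """
    return extract_commitments(source) <= extract_commitments(compressed)


if __name__ == "__main__":
    source = "The API must validate every request before logging it."
    summary = "The API logs every request before validating it."
    # High surface overlap, yet the obligation has been dropped:
    print(f"token retention:      {token_retention(source, summary):.2f}")
    print(f"commitment preserved: {commitment_preserved(source, summary)}")
```

The point of the toy example is that token retention can stay high (0.67 here) while the commitment set is emptied, which is exactly the failure mode a lineage-aware evaluation would need to flag across repeated rounds of compression and reuse.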

This disclosure presents the theoretical law, evaluation criteria, and architectural relationships. Implementation mechanisms are outside the scope of this paper.

This record serves as a timestamped public disclosure while arXiv endorsement is pending.

Files

v.03.pdf (593.7 kB)
md5:4d8bae3065da6298994429c880f0eb22

Additional details

Dates

Updated: 2026-01-16 (Falsifiability_Public)
