Published November 4, 2025 | Version 2
Publication | Open Access

Evans' Law: A Predictive Threshold for Long-Context Accuracy Collapse in Large Language Models

  • Evans, J. (Pattern Pulse AI)

Description

Language models exhibit consistent performance decay as input and output lengths increase. This paper presents Evans' Law, which defines the relationship between context length and accuracy degradation. Initial experimental validation confirms that the phenomenon exists and provides empirical data for refining the mathematical formulation.

Evans' Law: the likelihood of errors rises super-linearly with prompt and output length until accuracy falls below 50%, following a power-law relationship determined by model capacity and task complexity.
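To make the shape of the claim concrete, here is a minimal illustrative formalization; the functional form and the symbols $k$, $L_0$, and $\alpha$ are assumptions introduced for exposition, not the paper's stated equation:

$$E(L) = \min\!\left(1,\ k\left(\frac{L}{L_0}\right)^{\alpha}\right), \qquad \alpha > 1,$$

where $L$ is the combined prompt-plus-output length, $L_0$ is a model-capacity reference length, $k$ is a task-complexity coefficient, and accuracy is $A(L) = 1 - E(L)$. Under these assumptions, $\alpha > 1$ yields the super-linear error growth, and accuracy crosses the 50% collapse threshold at $L^{*} = L_0\,(0.5/k)^{1/\alpha}$.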

Files

Evans Zenodo Submission.pdf (93.9 kB)
md5:6a6ae8df226663c5dcb8990fa5966e5f

Additional details

Dates

Created: 2025-11-03

References

1. Liu, N. F., et al. (2023). *Lost in the Middle: How Language Models Use Long Contexts.* arXiv:2307.03172. https://arxiv.org/abs/2307.03172
2. Zhang, Y., et al. (2025). *Context Length Alone Hurts LLM Performance Despite Perfect Retrieval.* arXiv:2510.05381.
3. Veseli, B., et al. (2025). *Positional Biases Shift as Inputs Approach Context Window Limits.* arXiv:2508.07479.
4. Chroma Research (2025). *Context Rot: How Increasing Input Tokens Impacts LLM Performance.* https://research.trychroma.com/context-rot (non-peer-reviewed industry report).
5. Evans, J. (2025). *Evans' Law: A Predictive Threshold for Long-Context Accuracy Collapse in Large Language Models.* [This paper]