Published November 7, 2025 | Version v2
Publication Open

Evans' Law: A Predictive Threshold for Long-Context Accuracy Collapse in Large Language Models

  • Pattern Pulse AI

Description

 

Update – Version 2 (7 November 2025):

This record now includes the complete dataset, regression notebook, and detailed methods note for replication. The work remains preliminary and exploratory. All data and code are provided for transparency, and independent validation is encouraged.


Abstract

Language models exhibit consistent performance decay as input and output lengths increase. This paper presents Evans’ Law, which defines the relationship between context length and accuracy degradation.

 

Evans’ Law: The likelihood of errors rises super-linearly with prompt and output length until accuracy falls below 50 percent, following a power-law relationship determined by model capacity and task complexity.
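
As a hedged illustration of the power-law form stated above (the symbols A_0, L_0, and \alpha are placeholders, not notation taken from the record or methods note), accuracy A at context length L could be written as

A(L) = A_0 \left(\frac{L}{L_0}\right)^{-\alpha}, \qquad \alpha \text{ set by model capacity and task complexity,}

and the 50 percent collapse threshold L^{*} then follows from A(L^{*}) = 0.5:

L^{*} = L_0 \left(\frac{A_0}{0.5}\right)^{1/\alpha}.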

 

Initial experimental validation confirms that the phenomenon exists and provides empirical data to refine the mathematical formulation. The updated dataset and regression analysis extend this validation, showing a sub-linear scaling curve consistent across multiple large-language-model families.

 

Key materials in this version:

• Full dataset of coherence-loss threshold measurements (evanslaw_dataset.csv)

• Regression notebook (evanslaw_regression.ipynb); an illustrative fit is sketched after this list

• Regression analysis export (evanslaw_regression.html)

• Visualization of observed vs theoretical fits (evanslaw_plot.png)
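
For readers who want a quick, self-contained view of the kind of fit the regression notebook performs, the sketch below fits a power law to the dataset on a log-log scale. It assumes columns named context_length and accuracy; the actual column names, preprocessing, and model used in evanslaw_regression.ipynb may differ.

# Minimal illustrative power-law fit (assumed column names; not the notebook's exact code)
import numpy as np
import pandas as pd

df = pd.read_csv("evanslaw_dataset.csv")
log_len = np.log(df["context_length"].to_numpy(dtype=float))
log_acc = np.log(df["accuracy"].to_numpy(dtype=float))

# Ordinary least squares on log-transformed data: log A = log A0 - alpha * log L
slope, intercept = np.polyfit(log_len, log_acc, deg=1)
alpha = -slope
A0 = float(np.exp(intercept))

# Context length at which the fitted curve crosses 50 percent accuracy
L_half = (A0 / 0.5) ** (1.0 / alpha)
print(f"alpha = {alpha:.3f}, A0 = {A0:.3f}, 50% threshold ~ {L_half:.0f} tokens")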


All data were collected under fixed, low-temperature decoding settings (temperature 0.2, top-p 1.0). Methods and limitations are documented in the accompanying methods_note.pdf.
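
As a point of reference, a Hugging Face transformers generation call with these decoding settings might look like the sketch below; the model name and prompt are placeholders, and the actual evaluation harness is described in methods_note.pdf.

# Illustrative generation call with the reported decoding settings (placeholder model and prompt)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the models actually evaluated are listed in methods_note.pdf
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Example long-context prompt ...", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sampling must be enabled for temperature/top_p to take effect
    temperature=0.2,  # low-temperature setting reported for data collection
    top_p=1.0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))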


Files (916.7 kB)

evanslaw_analysis_v3_2025_11.ipynb

md5:2bc6ed1af4bbdff39b41b1d74cb8164a (511.1 kB)
md5:4560aa595a80a986dfc03cbf0afc6177 (233.3 kB)
md5:4abfbbd2a1ee00f5cc1cfbeaeb5002f0 (172.0 kB)
md5:9a6b445cca11cc7121684e391dcc1362 (253 Bytes)

Additional details

Dates

Created
2025-11-03

References

  1. Liu, N. F., et al. (2023). *Lost in the Middle: How Language Models Use Long Contexts.* arXiv:2307.03172. https://arxiv.org/abs/2307.03172
  2. Zhang, Y., et al. (2025). *Context Length Alone Hurts LLM Performance Despite Perfect Retrieval.* arXiv:2510.05381
  3. Veseli, B., et al. (2025). *Positional Biases Shift as Inputs Approach Context Window Limits.* arXiv:2508.07479
  4. Chroma Research (2025). *Context Rot: How Increasing Input Tokens Impacts LLM Performance.* https://research.trychroma.com/context-rot (non-peer-reviewed industry report)
  5. Evans, J. (2025). *Evans' Law: A Predictive Threshold for Long-Context Accuracy Collapse in Large Language Models.* [This paper]