Published February 25, 2026 | Version v1 | Preprint | Open access
Sleep-Wake Memory Convergence in Weight-Edited Language Models
Description
We present a sleep-wake architecture that injects facts directly into MLP weights using MEMIT during wake, then maintains them through sleep cycles of auditing, constrained refreshing, and pruning. On 8B and 70B models, we identify a sharp wake capacity threshold: the 8B model sustains 92% recall at 13 unconstrained edits, collapsing to 57% at 14 -- a tipping point caused by cascading edit interference. Sleep maintenance with null-space-constrained refreshes converges to 100% recall even from severe degradation: 30 facts at 40% recall recover fully within 4 sleep cycles. The 70B model converges 2x faster and absorbs a second injection wave with zero degradation, demonstrating that model scale provides more orthogonal weight dimensions for non-interfering edits. The ratio between wake capacity and sleep capacity defines the optimal sleep frequency -- a 'drowsiness signal' analogous to biological sleep pressure. We characterize a failure mode: when pruning removes working edits faster than refresh can replace them, a death spiral drives recall from 97% to 46% over 10 cycles. Perplexity remains stable throughout convergence (+0.5% for 8B at 14 facts, 0% for 70B), confirming that constrained MEMIT maintenance is a near-free operation. This paper supersedes our prior work on LoRA-based consolidation, removing LoRA entirely: MEMIT is now the sole memory mechanism, and sleep performs maintenance rather than pathway transfer.
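The audit-refresh-prune loop described above can be sketched in a few lines. This is a toy simulation, not the authors' implementation: the fact records, the `refresh_budget` parameter, and the three-failure prune threshold are all illustrative assumptions, and the "constrained refresh" here simply restores recall rather than performing a real null-space-constrained MEMIT edit.

```python
def sleep_cycle(facts, refresh_budget=10):
    """One sleep cycle over a set of edited facts (toy model).

    Audit: check whether each fact is still recalled.
    Refresh: restore up to `refresh_budget` failing facts
             (stands in for a null-space-constrained MEMIT edit).
    Prune: drop edits that failed three consecutive audits.
    Returns the post-cycle recall fraction.
    """
    refreshed = 0
    for fact in facts:
        if not fact["recalled"]:            # audit step
            if refreshed < refresh_budget:  # constrained refresh
                fact["recalled"] = True
                fact["failures"] = 0
                refreshed += 1
            else:                           # no budget left this cycle
                fact["failures"] += 1
    # prune edits that keep failing (too aggressive a threshold
    # here would reproduce the paper's death-spiral failure mode)
    facts[:] = [f for f in facts if f["failures"] < 3]
    return sum(f["recalled"] for f in facts) / max(len(facts), 1)

# Severe-degradation scenario from the abstract: 30 facts, 40% recall.
facts = [{"recalled": i < 12, "failures": 0} for i in range(30)]
recall = 0.0
for cycle in range(4):
    recall = sleep_cycle(facts)
print(recall)  # converges to 1.0 within the 4-cycle window
```

In this toy model the refresh budget plays the role of sleep capacity: if the number of failing facts per wake period exceeds what a cycle can refresh, sleep must come more often, which is the drowsiness signal the abstract describes.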
Files

| Name | Size |
|---|---|
| 5-Sleep-Wake-Memory-Convergence.pdf (md5:b1fd2571f99e72414edc43287df1efb3) | 113.4 kB |
Additional details
Related works
- Continues: Preprint 10.5281/zenodo.18778766 (DOI)