Published March 6, 2026 | Version v2 | Preprint | Open Access
Consensus-Target Linear Shrinkage: The Geometry of In-Context Learning in the Gradient-Gram Space
Description
In-context learning (ICL) performs consensus-target linear shrinkage on the
task-gradient Gram matrix: G^P = α_P·J + (1-α_P)·G^r. This implies an
Orthogonal Contraction Law and an Effective-Rank Entropy Bound. A 65-point
sweep confirms α_P as the master variable (per-task R² > 0.98). Cross-model
validation across 8 architectures (117M–2.7B parameters) shows prediction error below 3%.
Combined with the ECC decomposition, this yields the Adaptation Cone Law
constraining intelligence-improving adaptations.
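The shrinkage step in the abstract can be sketched numerically. The snippet below is a minimal illustration of the convex combination G^P = α_P·J + (1-α_P)·G^r, assuming J is a simple consensus target built from the mean entry of G^r; the paper's actual choice of J and of α_P may differ.

```python
import numpy as np

def consensus_shrinkage(G_r, alpha):
    """Linear shrinkage of a Gram matrix toward a consensus target.

    Computes G^P = alpha * J + (1 - alpha) * G_r. Here J is an
    *assumed* target: a constant matrix at the mean entry of G_r.
    """
    n = G_r.shape[0]
    J = np.full((n, n), G_r.mean())  # assumed consensus target
    return alpha * J + (1 - alpha) * G_r

# Toy example: Gram matrix of random "task gradients".
rng = np.random.default_rng(0)
grads = rng.standard_normal((4, 16))
G_r = grads @ grads.T          # 4x4 gradient Gram matrix
G_p = consensus_shrinkage(G_r, alpha=0.3)
```

At α = 0 the estimate reduces to the raw Gram matrix, and at α = 1 it collapses to the target J, so α_P interpolates between the two, which is consistent with its role as the single controlling variable in the sweep.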
Files (550.2 kB)

| Name | Size | MD5 |
|---|---|---|
| paper_d.pdf | 507.5 kB | 6ca1fa23eb43c435439aeaa93515c7bb |
| — | 42.7 kB | ade9e085f629f4ac405414a65d868d6d |
Additional details
Related works
- Continues
- 10.5281/zenodo.18842724 (DOI)
- 10.5281/zenodo.18865925 (DOI)
- Is supplement to
- https://hyeongrok91-collab.github.io/ (URL)