The Second Scaling Law: Intelligence Scales with Parameters, Wisdom Scales with Architecture
Description
Neural scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022) established that model intelligence, measured by loss on held-out data, scales predictably with parameter count and dataset size. We demonstrate that a second, orthogonal scaling law governs wisdom: the capacity to learn from sequential experience and to transfer insights to novel situations.
Through formal analysis and empirical validation on 3,600 agent–task–round interactions across three frontier LLMs (DeepSeek-v4-flash, Qwen-Plus, Qwen-Max), we prove that intelligence and wisdom are asymptotically decorrelated (Theorem 1) and that wisdom is dominated by architectural properties rather than parameter count (Theorem 2), and we derive a functional form for the Wisdom Scaling Law governed by three measurable architectural quantities: plasticity (Φ), immunity (Ψ), and homeostasis (H) (Theorem 3).
Our framework unifies recent advances in liquid neural networks, cognitive immunity, and Bayesian self-regulation under a single theoretical umbrella, and yields falsifiable predictions distinguishing it from the established parameter-scaling paradigm. We contend that the first scaling law charts the ceiling of artificial intelligence; the second charts the floor of artificial wisdom.
Key results:
- Spearman ρ(I, W) = -0.389 (n=12): higher Intelligence correlates with lower Wisdom
- Architecture explains more variance in Wisdom than model scale does (ANOVA)
- Power-law fit: W = C · Φ^0.42 · Ψ^0.31 · H^0.22 (R² = 0.89); a fitting sketch follows this list
- Three falsifiable conjectures: Parameter Irrelevance, Exponent Universality, Plasticity Phase Transition
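To make these results concrete, here is a minimal, self-contained sketch of how exponents like those in the power-law fit can be recovered by least squares in log space, and how a Spearman correlation between Intelligence and Wisdom scores is computed. Everything below is an illustrative assumption: the data are synthetic and the variable names are placeholders, not the paper's code or dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic architectural measurements for n agent configurations:
# plasticity (Phi), immunity (Psi), homeostasis (H), all positive.
n = 12
phi = rng.uniform(0.1, 1.0, n)
psi = rng.uniform(0.1, 1.0, n)
hom = rng.uniform(0.1, 1.0, n)

# Generate Wisdom scores from the fitted form W = C * Phi^0.42 * Psi^0.31 * H^0.22
# with multiplicative noise, then try to recover the exponents.
C = 2.0
wisdom = C * phi**0.42 * psi**0.31 * hom**0.22 * rng.lognormal(0.0, 0.05, n)

# Taking logs turns the power law into a linear model:
#   log W = log C + a*log Phi + b*log Psi + g*log H
X = np.column_stack([np.ones(n), np.log(phi), np.log(psi), np.log(hom)])
coef, *_ = np.linalg.lstsq(X, np.log(wisdom), rcond=None)
log_c, a, b, g = coef
print(f"C ~ {np.exp(log_c):.2f}, exponents ~ ({a:.2f}, {b:.2f}, {g:.2f})")

# Spearman rank correlation between an Intelligence score and Wisdom,
# as in the rho(I, W) result above. The Intelligence scores here are
# random placeholders; any per-configuration benchmark score would do.
intelligence = rng.uniform(0.0, 1.0, n)
rho, p_value = stats.spearmanr(intelligence, wisdom)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

The ANOVA-style comparison in the second result would follow the same pattern, for example by applying scipy.stats.f_oneway to Wisdom scores grouped by architecture and comparing against a regression on a scale proxy such as log parameter count.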
Files (284.5 kB)

| Name | Size |
|---|---|
| The_Second_Scaling_Law_Intelligence_Scales_with_Parameters_Wisdom_with_Architecture.pdf (md5:db10a69a8027fbdd9b9b759559c22e0b) | 284.5 kB |