Published December 7, 2025 | Version v2
Preprint | Open Access

Predicting Neural Scaling Laws from Data Geometry: Constraint Signatures Without the Human

Authors/Creators

Description

Neural scaling laws, which describe how loss decreases with dataset size D (L ~ D^{-beta_D}), are typically discovered through expensive empirical sweeps. We propose that the data scaling exponent can instead be predicted from dataset geometry via intrinsic dimension (ID).
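
As a rough illustration (not taken from the paper), the empirical exponent beta_D can be recovered from a handful of (dataset size, loss) pairs by a least-squares fit in log-log space; the sizes and losses below are made up.

```python
# Hypothetical sketch: fit L ~ a * D^(-beta_D) in log-log space.
# The (dataset size, loss) pairs are illustrative, not from the paper.
import numpy as np

data_sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8])   # training set sizes per run
losses = np.array([3.10, 2.85, 2.62, 2.44, 2.27])  # final validation losses

# log L = log a - beta_D * log D, so the slope of the log-log fit is -beta_D.
slope, intercept = np.polyfit(np.log(data_sizes), np.log(losses), deg=1)
beta_D = -slope
print(f"fitted beta_D ~ {beta_D:.3f}")
```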

Our key insight: from statistical learning theory, beta_D ~ s/d, where d is the intrinsic dimension and s is a smoothness parameter. We calibrate s ~ 4.5 on text, then predict on three held-out modalities without re-calibration. For unstructured text, predictions are accurate (scientific: 6% error). For structured data, predictions remain within 25% (code: 18%, tabular: 24%), consistent with the empirical variance in scaling-law estimates, and reveal a lower effective smoothness (s ~ 3.6-3.8), which we interpret as a diagnostic rather than a failure.
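
A minimal sketch of the prediction step, assuming dataset embeddings are available as a NumPy array: a TwoNN-style maximum-likelihood estimate of the intrinsic dimension d, combined with the smoothness value s ~ 4.5 calibrated on text, gives the predicted exponent beta_D ~ s/d. The estimator choice, function names, and synthetic data are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: TwoNN-style intrinsic-dimension estimate, then beta_D ~ s / d.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X):
    """TwoNN maximum-likelihood estimate of intrinsic dimension."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dist[:, 1], dist[:, 2]   # first and second nearest-neighbour distances
    mask = r1 > 0                     # drop exact duplicates
    mu = r2[mask] / r1[mask]          # ratios follow a Pareto(d) distribution
    return mask.sum() / np.sum(np.log(mu))

def predict_beta_D(embeddings, smoothness=4.5):
    """Predicted data scaling exponent from geometry: beta_D ~ s / d."""
    return smoothness / twonn_id(embeddings)

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20)) @ rng.normal(size=(20, 768))  # stand-in embeddings on a ~20-dim subspace
print(f"ID ~ {twonn_id(X):.1f}   predicted beta_D ~ {predict_beta_D(X):.3f}")
```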

We demonstrate falsifiability: injecting noise increases ID and monotonically decreases beta_D. The rank ordering across modalities (code > tabular > text > scientific) is preserved across embedding encoders.
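
A hedged sketch of the falsifiability check on synthetic low-dimensional data: adding Gaussian noise of increasing scale should raise the estimated ID and, through beta_D ~ s/d, monotonically lower the predicted exponent. The noise levels and data are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of the noise-injection check: more noise -> higher estimated ID
# -> lower predicted beta_D via beta_D ~ s / d (with s ~ 4.5 as calibrated on text).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X):
    """TwoNN maximum-likelihood intrinsic-dimension estimate (same as the sketch above)."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dist[:, 1], dist[:, 2]
    mask = r1 > 0
    return mask.sum() / np.sum(np.log(r2[mask] / r1[mask]))

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20)) @ rng.normal(size=(20, 768))  # low-ID synthetic "dataset"
scale = X.std()

for sigma in (0.0, 0.05, 0.1, 0.2):                           # illustrative noise levels
    X_noisy = X + sigma * scale * rng.normal(size=X.shape)
    d = twonn_id(X_noisy)
    print(f"noise sigma={sigma:.2f}   ID ~ {d:6.1f}   predicted beta_D ~ {4.5 / d:.3f}")
```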

Practical value: A 10-minute geometric probe can predict dataset scaling behavior before committing to expensive training runs.

Files (223.6 kB)

Predicting_Neural_Scaling_Laws_from_Data_Geometry.pdf
