Published March 9, 2026 | Version v1.0
Preprint · Open

Training Stability as an Admissibility Corridor in Machine Learning: A Structural Interpretation within the Paton System

Authors/Creators

Description

Machine learning training exhibits regimes of convergence, instability, and collapse. A model may converge toward a useful representation or diverge into numerical instability, depending on whether its recursive parameter updates remain compatible with the constraints of the system.

This paper presents a structural interpretation of training stability using the Paton System framework. Within this interpretation, training occurs inside an admissibility corridor in parameter space. Recursive updates that remain compatible with governing constraints allow continuation of the learning process, while updates that exceed admissible limits lead to divergence or collapse.
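The corridor picture can be illustrated with a minimal, hypothetical example (not taken from the paper): for gradient descent on the quadratic loss f(θ) = ½λθ², the recursive update contracts only while the learning rate satisfies η < 2/λ. Inside that bound the iteration converges; outside it, the same update rule diverges.

```python
# Minimal sketch (illustrative assumption, not the paper's method):
# gradient descent on f(theta) = 0.5 * lam * theta**2 applies the update
# theta <- theta - eta * lam * theta, which contracts iff |1 - eta*lam| < 1,
# i.e. eta < 2/lam. That bound plays the role of an admissibility corridor.

def run_gd(eta, lam=1.0, theta0=1.0, steps=100):
    """Iterate the recursive update and return the final |theta|."""
    theta = theta0
    for _ in range(steps):
        theta -= eta * lam * theta  # plain gradient descent step
    return abs(theta)

if __name__ == "__main__":
    inside = run_gd(eta=0.5)   # eta < 2/lam: admissible, iterates shrink
    outside = run_gd(eta=2.5)  # eta > 2/lam: inadmissible, iterates blow up
    print(f"inside corridor:  |theta| = {inside:.3e}")
    print(f"outside corridor: |theta| = {outside:.3e}")
```

With these illustrative values, the admissible run drives |θ| toward zero while the inadmissible run grows without bound, matching the continuation-versus-collapse distinction described above.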

The paper demonstrates how machine learning optimisation dynamics can be interpreted through admissibility conditions governing continuation. This example represents a Tier-7 domain instantiation within the Paton System, illustrating how the same structural continuation principles appear within computational systems.

The interpretation highlights a broader structural pattern: recursive systems persist only while their updates remain compatible with governing constraints, and collapse when those constraints are violated. This structure appears across multiple domains, including physical, ecological, financial, and engineered control systems.

Files

paton_system_ml_training_full_paper.pdf

202.6 kB (md5:84ab2071c52dc0e1d4eb443a1e2f3f92)