Published March 9, 2026 | Version v1.0
Preprint | Open Access

Neural Network Collapse Modes as Admissibility Failures: A Structural Interpretation within the Paton System

Description

Neural network training frequently exhibits instability and collapse phenomena. Common examples include exploding gradients, vanishing gradients, mode collapse in generative models, unstable loss oscillations, and representation collapse. These behaviours are typically treated as separate optimisation problems arising from algorithm design or numerical instability.
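The gradient pathologies listed above can be illustrated with a minimal sketch (not taken from the paper): in a deep linear chain, the backpropagated gradient scales as the layer weight raised to the network depth, so weights below 1 in magnitude drive gradients toward zero while weights above 1 blow them up. The function name and values here are purely illustrative.

```python
# Illustrative sketch: gradient through a depth-layer linear chain
# scales as weight**depth, showing vanishing (|w| < 1) and
# exploding (|w| > 1) regimes.

def chain_gradient(weight: float, depth: int) -> float:
    """Gradient of the output w.r.t. the input for a chain of
    `depth` linear layers that each multiply by `weight`."""
    grad = 1.0
    for _ in range(depth):
        grad *= weight  # one backprop step per layer
    return grad

vanishing = chain_gradient(0.5, 50)  # on the order of 1e-15, effectively zero
exploding = chain_gradient(1.5, 50)  # on the order of 1e8, numerically unstable
```

At 50 layers the two regimes already differ by roughly 24 orders of magnitude, which is why depth alone, without normalisation or careful initialisation, makes training unstable.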

This paper presents a structural interpretation of neural network collapse modes using the Paton System framework. Within this interpretation, collapse phenomena are understood as manifestations of inadmissible recursive updates within parameter space. When training updates remain within an admissible corridor of constraint compatibility, learning converges and stable representations emerge. When recursive updates exceed admissible limits, collapse modes appear.
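One hypothetical way to make the corridor idea concrete, under the assumption that admissibility can be read as a bound on update magnitude, is an update rule that rescales any step whose norm would leave the corridor. The names `admissible_step` and `corridor_limit` are illustrative assumptions, not the paper's formalism; the mechanism coincides with familiar gradient-norm clipping.

```python
# Hypothetical sketch: interpret the "admissible corridor" as a bound
# on the norm of each parameter update. Steps inside the corridor are
# applied as-is; steps that would exceed it are rescaled to the boundary.

def admissible_step(params, grads, lr=0.1, corridor_limit=1.0):
    """Return updated parameters, keeping the update norm <= corridor_limit.
    (Equivalent in mechanism to gradient-norm clipping.)"""
    norm = sum(g * g for g in grads) ** 0.5
    step_norm = lr * norm
    # Inside the corridor: take the plain gradient step.
    # Outside: shrink the step so its norm sits on the corridor boundary.
    scale = lr if step_norm <= corridor_limit else corridor_limit / norm
    return [p - scale * g for p, g in zip(params, grads)]
```

On this reading, an inadmissible recursive update is one whose raw step norm exceeds `corridor_limit`; repeatedly applying such steps unclipped is what the paper would classify as a collapse trajectory.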

This interpretation unifies several well-known neural network training failures as structural inadmissibility events within recursive learning systems. The analysis demonstrates how continuation and collapse in neural network training can be understood through admissibility conditions governing recursive systems. Within the Paton System architecture, neural network collapse modes represent a Tier-7 domain instantiation within computational systems.

Files (177.1 kB)

paton_neural_network_collapse_modes_full_paper.pdf