Published March 15, 2026 | Version v1
Preprint | Open Access

The Deformation Laws of Neural Identity

Community: Fall Risk Research

Description

Neural network identity is not monolithic. Different observables — hidden-state geometry, pre-softmax logit statistics, and behavioral output templates — sit at different depths in the forward computation and respond to perturbation on different timescales. This paper shows that three identity layers — structural, thermodynamic, and functional — each obey a distinct validated deformation law. The structural layer is model-specific, stable under non-destructive training interventions, and inert under same-family direct targeting in the observed regime. The thermodynamic layer is approximately universal across a validated 22-model Transformer cross-section. The functional layer is volatile, transferring through distillation and eroding under continued fine-tuning. We resolve the carrier of the structural layer as a two-channel geometric observable requiring both token-level magnitude and token-level direction, and we falsify two natural simplifications: that the structural fingerprint reduces to a gauge projection, and that it is predictable from coarse architecture features. Together these results define an admissibility condition for neural identity claims: such claims must specify which layer they address, because the layers do not share a deformation law.
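The abstract's structural-layer carrier is described as a two-channel geometric observable combining token-level magnitude and token-level direction. As a minimal illustrative sketch (hypothetical function names; not the paper's exact observable), the decomposition of a matrix of per-token hidden states into those two channels can be written as:

```python
import numpy as np

def two_channel_observable(hidden_states):
    """Split token-level hidden states into magnitude and direction channels.

    hidden_states: array of shape (num_tokens, d_model).
    Returns (magnitudes, directions): per-token L2 norms and unit vectors.
    Illustrative only; the paper's actual observable may differ.
    """
    norms = np.linalg.norm(hidden_states, axis=-1, keepdims=True)
    # Guard against division by zero for degenerate (all-zero) tokens.
    directions = hidden_states / np.clip(norms, 1e-12, None)
    return norms.squeeze(-1), directions

# Example: 4 tokens in an 8-dimensional hidden space.
h = np.random.default_rng(0).standard_normal((4, 8))
mags, dirs = two_channel_observable(h)
```

The point of separating the channels is that neither alone suffices as the structural fingerprint: discarding direction (keeping only norms) or discarding magnitude (keeping only unit vectors) loses information the paper argues the fingerprint requires.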
 

The Neural Network Identity Series — Mathematical foundations, empirical validation, and governance frameworks for verifying which model is running

  1. Paper 1: The δ-Gene: Inference-Time Physical Unclonable Functions from Architecture-Invariant Output Geometry (DOI: 10.5281/zenodo.18704275)

  2. Paper 2: Template-Based Endpoint Verification via Logprob Order-Statistic Geometry (DOI: 10.5281/zenodo.18776711)

  3. Paper 3: The Geometry of Model Theft: Distillation Forensics, Adversarial Erasure, and the Illusion of Spoofing (DOI: 10.5281/zenodo.18818608)

  4. Paper 4: Provenance Generalization and Verification Scaling for Neural Network Forensics (DOI: 10.5281/zenodo.18872071)

  5. Paper 5: Beneath the Character: The Structural Identity of Neural Networks — Mathematical Evidence for a Non-Narrative Layer of AI Identity (DOI: 10.5281/zenodo.18907292)

  6. Paper 6: Which Model Is Running?: Structural Identity as a Prerequisite for Trustworthy Zero-Knowledge Machine Learning (DOI: 10.5281/zenodo.19008116)
  7. Paper 7: The Deformation Laws of Neural Identity (DOI: 10.5281/zenodo.19055966)

  8. Paper 8: What Counts as Proof? — Admissible Evidence for Neural Network Identity Claims (DOI: 10.5281/zenodo.19058540)

  9. Paper 9: Composable Model Identity — Formal Hardening of Structural Attestations in the Enterprise Identity Stack (DOI: 10.5281/zenodo.19099911)

  10. Paper 10: Where Identity Comes From: Path Sensitivity and Endpoint Underdetermination in Neural Network Training (DOI: 10.5281/zenodo.19118807)

  11. Paper 11: Post-Hoc Disclosure Is Not Runtime Proof: Model Identity at Frontier Scale (DOI: 10.5281/zenodo.19216634)

  12. Paper 12: Family-Dependent Response to Reasoning Distillation Across Structural and Functional Identity Layers (DOI: 10.5281/zenodo.19298857)

  13. Paper 13: Safety-Alignment Removal as a Model-Identity Failure — Structural Evidence from Published Weight-Level Mutation Checkpoints (DOI: 10.5281/zenodo.19383019)

Copyright (c) 2026 Anthony Ray Coslett / Fall Risk AI, LLC. All Rights Reserved.

Confidential and Proprietary.

Patent Pending (Applications 63/982,893, 63/990,487, 63/996,680, 64/003,244).

Files

deformation_laws.pdf (324.0 kB; md5:91f09421bf9c8f528cb7ef9aeae7e47d)

Additional details

Software

Repository URL: https://github.com/fallrisk-ai/IT-PUF
Development Status: Active