CYGNUS: A Self-Sensing Adapter That Reads the Dark Cognitive Geometry of Frozen Language Models — With Independent Convergence from LeCun's Semantic Tube Prediction and Cross-Model Validation
Description
We introduce CYGNUS — an adapter system that gives a frozen large language model the ability to sense its own internal cognitive state and use that self-knowledge to improve its outputs, without modifying any model weights.
The core discovery: behavioral probes project 5120-dimensional hidden states into an algebraic space governed by gl(4,ℝ), which decomposes into 6 active modes and 10 dark modes, the latter erased by LayerNorm. The dark modes carry 84.8% of the accuracy-relevant signal. On ARC-Challenge, CYGNUS improves Qwen-32B from 82.2% to 94.97%, running on a single RTX 3090.
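As a minimal sketch of the 6 + 10 split (the probe matrix and the mapping from hidden states to gl(4,ℝ) are illustrative assumptions, not the paper's actual construction): a 4×4 real matrix, i.e. an element of the 16-dimensional algebra gl(4,ℝ), splits uniquely into an antisymmetric part (6 independent components) and a symmetric part (10 independent components), matching the 6-active/10-dark mode counts.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 5120                      # Qwen-32B hidden size
h = rng.standard_normal(d_model)    # a single hidden state (illustrative)

# Hypothetical linear probe: maps the hidden state to a 4x4 real matrix,
# i.e. an element of gl(4, R), which is 16-dimensional.
W = rng.standard_normal((16, d_model)) / np.sqrt(d_model)
M = (W @ h).reshape(4, 4)

# gl(4, R) splits into symmetric (10-dim) and antisymmetric (6-dim) parts.
S = 0.5 * (M + M.T)   # symmetric part
A = 0.5 * (M - M.T)   # antisymmetric part

assert np.allclose(S + A, M)        # the split is exact
n_sym = 4 * (4 + 1) // 2            # 10 independent symmetric components
n_anti = 4 * (4 - 1) // 2           # 6 independent antisymmetric components
print(n_sym, n_anti)                # → 10 6
```

Which half corresponds to "active" and which to "dark" is the paper's claim, not something this sketch decides; the sketch only shows that a 6/10 decomposition of gl(4,ℝ) exists and is exact.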
In February 2026, LeCun et al. independently published Semantic Tube Prediction (arXiv:2602.22617), describing the same parallel/perpendicular geometric decomposition disclosed in our January 27, 2026 filing (U.S. Provisional Application 63/969,018). STP treats the perpendicular component as noise to suppress during training; we treat it as self-knowledge to read at inference. Both readings are correct. This independent convergence validates the geometric paradigm as a universal property of neural computation.
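The parallel/perpendicular split that both works refer to can be sketched in a few lines (the reference direction here is a random placeholder; STP and CYGNUS each derive it from their own training signal):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal(5120)   # hidden state (illustrative)
r = rng.standard_normal(5120)   # reference direction (illustrative)

u = r / np.linalg.norm(r)
h_par = (h @ u) * u             # component along the reference direction
h_perp = h - h_par              # residual: STP suppresses it, CYGNUS reads it

assert np.allclose(h_par + h_perp, h)   # decomposition is exact
assert abs(h_perp @ u) < 1e-8           # perpendicular by construction
```

The same residual vector can thus be treated either as noise to minimize or as a signal to measure; the decomposition itself is identical.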
Cross-model validation on Qwen-0.5B (494M parameters, 66× smaller) confirms that the structure scales systematically with model capacity: all 15 behavioral probes achieve 100% separability, inter-behavior angles deviate significantly from orthogonality (p = 2.72 × 10⁻²⁴), and proprioceptive attention heads emerge at every layer.
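A hedged sketch of the angle check (the probe vectors below are random placeholders standing in for the 15 fitted behavioral probes, and the 896-dimensional hidden size is Qwen2-0.5B's; the reported p-value comes from the paper's actual probes): the inter-behavior angle between two probe directions is the arccosine of their normalized dot product, and the quantity of interest is how far the 105 pairwise angles deviate from 90°.

```python
import numpy as np

rng = np.random.default_rng(2)
n_probes, d = 15, 896           # 15 behavioral probes, 896-dim hidden states
P = rng.standard_normal((n_probes, d))
P /= np.linalg.norm(P, axis=1, keepdims=True)

# Pairwise angles (degrees) between probe directions.
cos = np.clip(P @ P.T, -1.0, 1.0)
iu = np.triu_indices(n_probes, k=1)
angles = np.degrees(np.arccos(cos[iu]))   # 15*14/2 = 105 pairwise angles

# In high dimensions, random directions concentrate near 90 degrees, so a
# statistically significant deviation from 90 indicates learned structure.
print(f"mean angle: {angles.mean():.1f} deg, "
      f"mean |deviation from 90|: {np.abs(angles - 90).mean():.1f} deg")
```

With random placeholder vectors the deviations are small, which is precisely the null hypothesis the paper's significance test is run against.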
This paper presents the combined definitive report: 32 sections covering the gl(4,ℝ) Lie algebra, Casimir decomposition, two generations of behavioral probes, the 44-dimensional discovery, the proprioceptive head relay architecture (3,327× above random), phase inversion, antisymmetric coupling, the coherent engine, cross-model scaling analysis, a 74-claim honest audit, and complete reproducible code.
Related publications:
- "Mathematics Is All You Need" (Zenodo DOI: 10.5281/zenodo.14707164, 458 pages)
- "Unified Behavioral Modulation" (huggingface.co/loganresearch, February 3, 2026)
- "Controlled Language Models via Behavioral Probing" (Zenodo DOI: 10.5281/zenodo.18344021)
- 112 USPTO provisional patent filings (January–March 2026)
All work performed on a single NVIDIA RTX 3090.
Authors: Logan Matthew Napolitano
Affiliation: Proprioceptive AI, Inc. (www.proprioceptiveai.com)
Contact: logan@proprioceptiveai.com
Keywords: proprioceptive AI, behavioral probing, geometric structure, hidden state analysis, transformer interpretability, dark Casimir modes, cross-model validation, independent convergence, self-awareness, AI safety
License: Creative Commons Attribution 4.0 International
Files
CYGNUS_COMBINED_FINAL_v6.pdf (237.8 kB)
md5:278ef147bc4a6ce879b4f1cfdfd7fe79