Published January 14, 2026 | Version v1.0
Preprint | Open Access

The Holonomy Transformer: A Geometrically-Native Neural Architecture for Consistent Reasoning

Authors/Creators

Description

This work introduces the Holonomy Transformer (HoT), a neural architecture that embeds geometric consistency constraints directly into transformer computation. Tokens are represented as sections of a fiber bundle, and attention is computed via parallel transport with holonomy-based costs that structurally suppress inconsistent information flow.
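The record does not specify how transport or the holonomy cost is parameterized, so the following is only a minimal PyTorch sketch of the stated idea. Every name in it (HolonomyAttention, gen, lambda_hol) is illustrative rather than taken from the report: each token is given a learned orthogonal transport matrix, keys are parallel-transported into the query's frame before scoring, and the deviation of the round-trip transport from the identity acts as a holonomy cost subtracted from the attention logits.

```python
# A hedged sketch of holonomy-penalized attention, NOT the report's implementation.
# Assumption: each token carries an orthogonal transport matrix T_i built from a
# learned skew-symmetric generator (gen and lambda_hol are hypothetical names).
import torch
import torch.nn as nn


class HolonomyAttention(nn.Module):
    def __init__(self, dim: int, lambda_hol: float = 1.0):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Per-token skew-symmetric generator -> orthogonal transport via matrix exp.
        self.gen = nn.Linear(dim, dim * dim)
        self.lambda_hol = lambda_hol
        self.dim = dim

    def transport(self, x: torch.Tensor) -> torch.Tensor:
        # T = exp(A - A^T) is orthogonal, so T^{-1} = T^T.
        a = self.gen(x).view(*x.shape[:-1], self.dim, self.dim)
        return torch.matrix_exp(a - a.transpose(-1, -2))  # (B, N, d, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, d = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        T = self.transport(x)
        # Round-trip loop i -> j -> i: H_ij = T_i^T T_j. Identity means flat
        # (consistent) transport; deviation from the identity is the holonomy cost.
        Hij = torch.einsum('bimn,bjnp->bijmp', T.transpose(-1, -2), T)
        eye = torch.eye(d, device=x.device)
        hol_cost = ((Hij - eye) ** 2).sum(dim=(-1, -2))  # (B, N, N)
        # Keys are parallel-transported into the query's frame before scoring,
        # then the holonomy cost downweights geometrically inconsistent pairs.
        logits = torch.einsum('bim,bijmp,bjp->bij', q, Hij, k) / d ** 0.5
        logits = logits - self.lambda_hol * hol_cost
        return torch.softmax(logits, dim=-1) @ v
```

Because T is orthogonal, the loop i → j → i reduces to T_i^T T_j, so a pair with non-trivial holonomy pays a penalty even when its raw dot-product affinity is high; this is one concrete way the "structural suppression" of inconsistent flow could be realized.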

The architecture enforces reasoning consistency as a geometric property rather than a learned statistical regularity, using holonomy penalties, curvature-gated feedforward layers, and waypoint-based routing. A companion technical report describes extensions in which creativity and exploration are treated as cost-guided deviations within the learned geometric manifold.
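Curvature gating can be sketched in the same spirit. The description does not say how curvature is estimated, so the scalar head below (curv_head, a hypothetical name) stands in for whatever local curvature measure the report actually uses; the point of the sketch is only the gating pattern, in which the feedforward residual update is shrunk where estimated curvature is high.

```python
# A hedged sketch of a curvature-gated feedforward layer, NOT the report's design.
# Assumption: a per-token scalar curvature proxy from a small learned head
# (curv_head is hypothetical); the real curvature estimate is unspecified here.
import torch
import torch.nn as nn


class CurvatureGatedFFN(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )
        self.curv_head = nn.Linear(dim, 1)  # scalar curvature proxy per token

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gate in (0, 1]: higher estimated curvature -> smaller gate, so the
        # residual update is suppressed in strongly "bent" regions of the manifold.
        curvature = torch.nn.functional.softplus(self.curv_head(x))  # >= 0
        gate = torch.exp(-curvature)
        return x + gate * self.ffn(x)
```

Waypoint-based routing would sit on top of such layers; since the record gives no mechanism for it, it is left out of the sketch rather than guessed at.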

This submission presents the core architecture and theoretical framework. Empirical scaling and benchmarking are left to future work.

Files

hot_technical_report.pdf (713.7 kB)
md5:99d2602dff53fc1c285bbe51e0340eac

Additional details