The Geometric Limits of Vector-Space Models: Why Contemporary AI Cannot Access Human Phase-Topological Cognition
Description
This paper examines a geometric assumption implicit in most contemporary AI systems: that cognition can be represented within fixed-dimensional vector spaces. We argue that this assumption has not been fully scrutinized, and that its limitations become evident when contrasted with the phase-topological structure of human cognition.
Language is not a purely one-dimensional sequence, but a hybrid structure composed of a linear form and multi-dimensional semantic dependencies. It functions as a fractional-dimensional embedding that compresses high-dimensional cognitive structure into a transmissible sequence. This constitutes the first topological folding from mind to language; here, “fractional-dimensional” refers to effective representational degrees of freedom rather than a formal fractal metric.
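The notion of "effective representational degrees of freedom" can be made concrete with a standard proxy that is not part of the paper itself: the participation ratio of the singular-value spectrum of a data matrix. The sketch below is purely illustrative; the synthetic data, the latent rank of 5, and the use of the participation ratio are all assumptions introduced here, not constructs from the paper.

```python
import numpy as np

# Illustrative only: "effective degrees of freedom" estimated via the
# participation ratio of singular values -- a common proxy for effective
# dimensionality (an assumption of this sketch, not the paper's formalism).
rng = np.random.default_rng(0)

# Synthetic "high-dimensional structure": 500 points in R^64 that in fact
# vary along only 5 latent directions, plus small isotropic noise.
latent = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 64))
data = latent + 0.01 * rng.normal(size=(500, 64))

# Singular values of the centered data matrix.
s = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
p = s**2 / (s**2).sum()
effective_dim = 1.0 / (p**2).sum()   # participation ratio

print(f"ambient dimension:   {data.shape[1]}")
print(f"effective dimension: {effective_dim:.2f}")  # far below the ambient 64
```

The point of the sketch is only that a representation can occupy far fewer effective dimensions than its ambient space, which is the sense in which the paper uses "fractional-dimensional".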
Modern AI imposes a second folding by embedding tokenized language into a fixed-dimensional vector space, where all computation is constrained to interpolation within a pre-specified geometric manifold. The resulting double-folded representation can be expressed as: Mind → Language (high-dimensional compression into a fractional structure), and Language → AI Vector Space (forced embedding into a fixed geometry).
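The second folding described above can be sketched in a few lines. The vocabulary, dimension, and embedding matrix below are hypothetical placeholders; the sketch only shows the structural point that the geometry of the representation space is fixed before any input is seen.

```python
import numpy as np

# Minimal sketch of the "second folding": tokenized language is mapped into
# one pre-specified vector space. Vocabulary and dimension are hypothetical.
rng = np.random.default_rng(0)
vocab = {"mind": 0, "language": 1, "vector": 2, "space": 3}
d = 8                                  # fixed before any input is seen
E = rng.normal(size=(len(vocab), d))   # one row per token, all rows in R^d

def embed(tokens):
    # Every input, whatever its semantic structure, lands in the same R^d.
    return np.stack([E[vocab[t]] for t in tokens])

x = embed(["mind", "language"])
print(x.shape)  # (2, 8): sequence length varies, the geometry does not
```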
We argue that contemporary AI systems exhibit systematic difficulties in forming genuine abstractions, in forms of insight that involve abrupt representational reconfiguration, and in building deep world models. These limitations appear to stem not solely from data or compute constraints, but from a geometric mismatch between fixed-dimensional vector spaces and the phase-topological structures posited for human cognition.
By comparing the operational properties of vector spaces and phase-topological manifolds, we show that they belong to different topological families and therefore do not admit a homeomorphic or invertible mapping. This work presents a theoretical and conceptual geometric framework rather than an empirical or algorithmic evaluation, situating the contribution within foundational AI theory rather than experimental modeling. These observations suggest that progress beyond interpolation-based models may depend on exploring representational spaces whose geometric properties more closely align with those hypothesized for human cognition. This work does not attempt to define such spaces, but highlights geometric considerations that may be relevant for future architectural design.
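The claim that the two families of spaces admit no invertible mapping can be illustrated with a deliberately simple toy case, which is an assumption of this sketch rather than the paper's own construction: any dimension-reducing linear projection is many-to-one, so distinct points can collapse to the same image and no inverse can recover them.

```python
import numpy as np

# Toy illustration (not the paper's formalism): a projection from R^3 to
# R^2 is many-to-one, so no inverse mapping exists on its image.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # drop the third coordinate

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, -7.0])    # differs only along the discarded direction

print(np.allclose(P @ a, P @ b))  # True: the difference is unrecoverable
```

An actual homeomorphism argument requires topological invariants rather than linear algebra; the sketch only conveys why loss of structure under a folding cannot, in general, be undone.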
Files
The Geometric Limits of Vector-Space Models.pdf (258.8 kB)