Published December 24, 2025 | Version 1.0
Publication | Open Access

Semiotic Frustration in Machine Learning: Resolving the Dimensionality Paradox Through Reductionist Taxonomy

Description

This paper identifies a persistent semiotic conflation in machine learning: the term “dimensionality” is routinely used to denote vector/parameter count rather than true dimensional expansion, the addition of independent axes that creates exponential volume and new topological invariants. This conflation merges two fundamentally distinct mechanisms: vector aggregation (adding correlated coordinates within a fixed topology) and structural expansion (introducing new degrees of freedom).

To correct this, the paper introduces a reductionist taxonomy that distinguishes:

  • Vectoral Containment: correlated vector stacking in a fixed topology, generating redundancy and degeneracy rather than new volume.
  • True Dimensional Expansion: addition of independent axes producing exponential configuration‑space growth, new invariants, and qualitative structural transformation.
  • VNode (Vector‑Node): the canonical unit of aggregation, enabling formal separation of nominal coordinate count from effective independence.
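The gap between nominal coordinate count and effective independence can be sketched numerically. The following is a minimal illustration, not drawn from the paper: `effective_rank` is a hypothetical helper that uses the participation ratio of singular values as a proxy for the number of independent axes.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_rank(X):
    """Participation ratio of singular values: a proxy for independent axes."""
    s = np.linalg.svd(X, compute_uv=False)
    p = s**2 / np.sum(s**2)
    return 1.0 / np.sum(p**2)

n = 1000

# Vectoral containment: 50 coordinates that are noisy mixtures of 2 latent axes.
latent = rng.standard_normal((n, 2))
mixing = rng.standard_normal((2, 50))
contained = latent @ mixing + 0.01 * rng.standard_normal((n, 50))

# True dimensional expansion: 50 genuinely independent axes.
expanded = rng.standard_normal((n, 50))

# Nominal dimensionality is 50 in both cases; effective independence differs sharply.
print(effective_rank(contained))  # small: close to the 2 latent axes
print(effective_rank(expanded))   # large: near the nominal 50
```

Under this toy construction, stacking correlated coordinates leaves the effective rank pinned near the latent dimension, while adding independent axes grows it with the nominal count, which is the distinction the taxonomy formalizes.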

By restoring the invariant meaning of dimension to its geometric and physical definition, the framework resolves the dimensionality paradox: contemporary models grow in parameter count but not in independent structure. This explains the recurring empirical signatures across modern ML systems—low intrinsic dimension, flat minima, low‑rank Fisher spectra, degeneracy amplification, thin‑shell sparsity, and scaling plateaus—without requiring new algorithms or exotic mechanisms. These behaviors emerge naturally when aggregation is mistaken for expansion.
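One of the signatures listed above, thin‑shell concentration, is easy to reproduce independently of the paper: norms of high‑dimensional Gaussian vectors concentrate in a shell of roughly constant width around √d, a standard geometric fact sketched here with numpy.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512

# 10,000 samples from a standard Gaussian in d dimensions.
x = rng.standard_normal((10_000, d))
norms = np.linalg.norm(x, axis=1)

# Norms cluster tightly around sqrt(d); the shell width stays O(1) as d grows.
print(norms.mean() / np.sqrt(d))  # close to 1
print(norms.std())                # stays bounded, independent of d
```

The point of the sketch is that mass lives in a thin shell rather than filling the ball, one of the behaviors the taxonomy attributes to genuine high‑dimensional structure rather than aggregation.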

Intended contribution: a minimal, systems‑theoretic primitive set that aligns machine learning terminology with geometry, topology, and physics. The taxonomy unifies observations across embedding geometry, optimization behavior, manifold structure, and scaling limits, providing a coherent interpretive lens for future work on independence, representation, and structural capacity. It makes no empirical claims or architectural prescriptions; its purpose is conceptual correction, not algorithmic innovation.

Files

semiotic-frustration-in-machine-learning-v1.0.pdf

Size: 456.3 kB
md5: 89a9b3b6f2e76aadf0a7ccae2536b4bc

Additional details

Additional titles

Subtitle
A Systems-Theoretic Framework for Vectoral Containment, Manifold Geometry, and True Dimensional Expansion

Dates

Submitted
2025-12