Published January 3, 2026 | Version v1
Preprint | Open

Semantic Transition Field: A Unified Theory of Reading for Humans and AI

Authors/Creators

Description

Reading is commonly modeled in machine learning as feeding a sequence of tokens into a model and producing an output. This operationalization, however, obscures the cognitive mechanics that make human reading robust: the dynamic interplay between lexical semantic priors and context-dependent combinatory inference. We present the Semantic Transition Field (STF), a theory that formalizes reading as a two-stage process—lexical semantic decoding and contextual semantic transition—and shows how both human and artificial readers instantiate the same computational principles. Building on distributed lexical representations (word vectors) and modern attention-based architectures, STF posits an explicit decomposition: a lexical decoder maps embeddings to distributions over latent concepts, and a transition operator composes and updates these distributions across context. We derive formal definitions, relate STF to Transformer mechanisms, prove representational properties, propose concrete architectures and training objectives that implement STF, and outline experiments demonstrating improved compositional generalization and interpretability. Finally, we argue that STF offers a unifying explanatory lens for human psycholinguistic findings and for emergent behaviors in large language models, suggesting paths for more sample-efficient, explainable, and human-aligned text understanding.
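The two-stage decomposition described above (a lexical decoder producing concept distributions, followed by a transition operator that updates them across context) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the linear decoder `W_dec`, the linear transition `W_trans`, the log-space combination rule, and all dimensions are assumptions introduced here for concreteness.

```python
import numpy as np

# Hypothetical sketch of the STF decomposition: a lexical decoder maps
# token embeddings to distributions over latent concepts, then a
# transition operator updates those distributions left-to-right across
# the context. All names, shapes, and update rules are illustrative.

rng = np.random.default_rng(0)

D, K, T = 8, 4, 5  # embedding dim, latent concepts, sequence length

W_dec = rng.normal(size=(D, K))    # lexical decoder (assumed linear)
W_trans = rng.normal(size=(K, K))  # transition operator (assumed linear)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Stage 1: lexical semantic decoding, applied token by token.
embeddings = rng.normal(size=(T, D))
lexical = softmax(embeddings @ W_dec)  # (T, K) concept distributions

# Stage 2: contextual semantic transition, a running contextual state
# that composes the prior state with each token's lexical distribution.
state = lexical[0]
for t in range(1, T):
    state = softmax(state @ W_trans + np.log(lexical[t] + 1e-9))
```

After the loop, `state` is a single distribution over the K latent concepts summarizing the sequence; a Transformer-style realization would replace the left-to-right loop with attention, as the abstract suggests.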

Files

Semantic Transi.pdf (259.2 kB)
md5:2290751263aed81c61f2841d0dc2e44f