Non-Linguistic Semantic Transmission via Vector-Field Images
Description
This paper documents a series of controlled experiments investigating whether abstract, non-linguistic visual structures can function as carriers of semantic information for large language models with vision capabilities. Specifically, we examine images structured as dynamic vector-field–like systems and test whether such images transmit process-level meaning—not symbols, text, or narratives—that is independently reconstructed by different AI models with high structural consistency.
Across multiple positive controls and deliberately designed negative tests, we observe that models reliably decode dynamic invariants (such as attractors, vector flows, stability regimes, and interaction topologies) rather than surface aesthetics or authorial intent. Attempts to eliminate meaning by introducing visual irregularities or “broken grammar” frequently fail, suggesting that the threshold for semantic emergence in such representations is lower than expected, provided global dynamical coherence is preserved.
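To make the notion of a "dynamic invariant" concrete, the following minimal sketch (not taken from the paper; the specific field is a hypothetical choice) defines a 2-D vector field with a single point attractor and a rotational component, then integrates a trajectory to show that the attractor, not any surface detail, is the structurally stable feature such a field carries:

```python
import numpy as np

# Hypothetical example field: dx/dt = -x - y, dy/dt = x - y.
# Its Jacobian has eigenvalues -1 +/- i, so the origin is a stable spiral
# (a point attractor with rotational flow around it).
def field(p):
    x, y = p
    return np.array([-x - y, x - y])

def integrate(p0, dt=0.01, steps=2000):
    """Euler-integrate one trajectory; it should spiral into the attractor."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p += dt * field(p)
    return p

# Any starting point ends up near the origin: the attractor is the invariant.
end = integrate([2.0, -1.5])
print(np.linalg.norm(end))  # distance to the attractor after 20 time units
```

Rendering such a field as an image (e.g., as arrows on a grid) preserves the attractor and flow topology regardless of color, stroke style, or other aesthetic choices, which is the kind of invariant the experiments probe.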
We further note an emergent structural resemblance between these vector-field representations and known organizational patterns in biological neural systems. We interpret this resemblance not as biomimicry, but as convergence toward shared dynamical principles governing distributed cognitive systems.
This work does not claim the discovery of a new language. Instead, it proposes that vector-field images may act as post-linguistic semantic carriers, aligned with how artificial—and potentially biological—systems internally represent meaning through dynamics rather than symbols.
Files

| Name | Size |
|---|---|
| Non-Linguistic Semantic Transmission via Vector-Field Images.pdf (md5:e89bdc9a73a31e00283030e2af0c12b1) | 10.4 MB |