Spatial Unified Network for Cross-Environment Semantic Reuse
Description
This paper introduces Spatial Unified Network (S.U.N.), an architectural framework for separating semantic object identity from perceptual representation in agent environments. The paper argues that object meaning should remain behaviorally available across differing perceptual routes rather than being tied to any single detection method, engine, or sensor stack.
S.U.N. is presented as a spatial-semantic architecture in which environments provide structured spatial fields, objects are situated within that field, and semantic identities are resolved independently of the specific sensory input through which they are encountered. The contribution is architectural rather than substrate-final: the paper does not propose a complete robotics stack or finalized spatial substrate, but instead defines an architectural basis for semantic reuse across environments.
A minimal engine-agnostic demonstration illustrates how heterogeneous perceptual inputs may be mapped to a shared semantic object layer from which aligned behavior can be selected. The paper also outlines how the same framework may remain compatible with richer spatial implementations, including node-local constraint properties such as density, cohesion, compaction, permeability, or deformation-relevant state, without collapsing object meaning into raw local representation.
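The mapping described above can be sketched minimally. The names, resolver tables, and functions below are hypothetical illustrations of the idea, not the actual API of the linked demo repository: several route-specific percepts (vision, lidar, a text tag) resolve to one shared semantic identity, and behavior is selected from that identity's affordances rather than from any raw representation.

```python
from dataclasses import dataclass

# Hypothetical sketch: identifiers here are illustrative and are not
# taken from the sun-semantic-identity-demo repository.

@dataclass(frozen=True)
class SemanticObject:
    """Engine-agnostic semantic identity, decoupled from any sensor stack."""
    identity: str          # stable semantic label, e.g. "door"
    affordances: tuple     # behaviors the object makes available

# Shared semantic layer: one identity, reachable via many perceptual routes.
SEMANTIC_LAYER = {
    "door": SemanticObject("door", ("open", "close")),
    "cup": SemanticObject("cup", ("grasp", "pour")),
}

# Per-route resolvers map raw perceptual tokens onto shared identities.
PERCEPTUAL_ROUTES = {
    "vision": {"rect_hinged_panel": "door", "cylinder_handle": "cup"},
    "lidar": {"planar_gap_cluster": "door"},
    "text_tag": {"DOOR_01": "door", "CUP_07": "cup"},
}

def resolve(route: str, token: str):
    """Resolve a route-specific percept to its semantic identity, if any."""
    identity = PERCEPTUAL_ROUTES.get(route, {}).get(token)
    return SEMANTIC_LAYER.get(identity) if identity else None

def select_behavior(obj: SemanticObject, goal: str):
    """Pick an aligned behavior only if the object affords it."""
    return goal if goal in obj.affordances else None

# The same semantic object is reached through two different perceptual routes,
# so object meaning is not tied to any single detection method.
via_vision = resolve("vision", "rect_hinged_panel")
via_lidar = resolve("lidar", "planar_gap_cluster")
assert via_vision == via_lidar
```

Richer spatial implementations could extend `SemanticObject` with node-local constraint properties (density, cohesion, permeability) as additional fields, while `resolve` continues to return the same shared identity.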
This paper is released as a companion architectural note related to the broader PUTMAN/agent-architecture research program, while remaining readable as a standalone work.
Files

| Name | Size | MD5 |
|---|---|---|
| PaperCompanion2_Spatial_Unified_Network_for_Cross-Environment_Semantic_Reuse_v0.2.pdf | 165.4 kB | 5855baab849fdffb226d7713f8de85fa |
Additional details
Software
- Repository URL: https://github.com/putmanmodel/sun-semantic-identity-demo
- Programming language: Python
- Development status: Active