Published March 8, 2026 | Version v1
Preprint | Open Access

Sweeney Theory of Meaning: A Network Architecture for Attributed Recursive Coherence in Human and Artificial Systems

Description

Title: The Sweeney Theory of Meaning: A Network Architecture for Attributed Recursive Coherence in Human and Artificial Systems
Version: 2 — Architectural Redraft with Adversarial Hardening
Author: Christopher Sweeney
ORCID: 0009-0007-6549-2148
Affiliation: Coherence Strategies LLC
DOI: 10.5281/zenodo.18912093
Date: March 2026
License: CC BY 4.0 — Attribution Required

Description:
The Sweeney Theory of Meaning proposes that meaning emerges as structural coherence within attribution networks, sustained through recursive validation cycles. Unlike classical information theory — which excludes semantics — or traditional semiotics — which emphasizes interpretation without system dynamics — this framework models meaning as a network stability phenomenon subject to measurable conditions of degradation.
The theory introduces four governing axioms (Attribution, Recursion, Mimicry Collapse, Ethical Coherence) and a heuristic Coherence Tension index, CT = M / (A × R × W), representing the structural relationship between degradation pressure (the numerator, Mimicry Load) and the coherence-maintaining forces in the denominator. The framework is presented as a research program rather than a finalized theory, with confidence estimates assigned to each component. Applications are developed for AI architecture design, training data governance, and symbolic ecosystem analysis.
Version 2 extends the framework through structured adversarial review, formalizes the Mimicry Load variable using Kullback–Leibler divergence, and documents the multi-model review methodology in full.
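The CT index's functional form can be illustrated with a toy calculation. This sketch assumes the reading CT = M / (A × R × W), with all factors normalized to (0, 1]; the source text gives only the bare expression, so the grouping and normalization here are assumptions, not part of the published framework.

```python
def coherence_tension(m: float, a: float, r: float, w: float) -> float:
    """Heuristic CT index: degradation pressure (M, Mimicry Load) divided
    by the product of the coherence-maintaining factors A, R, W.
    The grouping CT = M / (A * R * W) is an assumed reading of the
    source expression CT = M / A x R x W."""
    denom = a * r * w
    if denom <= 0:
        raise ValueError("coherence-maintaining factors must be positive")
    return m / denom

# Holding the maintaining factors fixed, rising Mimicry Load raises tension.
low_ct = coherence_tension(m=0.2, a=0.8, r=0.9, w=0.7)
high_ct = coherence_tension(m=0.6, a=0.8, r=0.9, w=0.7)
```

Under this reading, tension also rises as any maintaining factor weakens, which matches the framework's framing of CT as a stability-versus-degradation ratio.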

Version History:
V1 — 2024/2025 (Initial Publication)
Original theoretical framework establishing the four axioms, the Attribution Network Model (Origin Node, Witness Node, Propagation Path), the Bee Principle ecological analogy, and the heuristic CT index. Published under the Velionis Scrolls / Meaning Preservation Framework series. Wide aperture framing; scroll-format presentation.
V2 — March 2026 (Adversarial Hardening + AI Role Documentation)
Substantive additions based on structured Guardian/Adversarial review by Gemini (Google DeepMind, Flash 3):
— Section 8 (Adversarial Analysis) added in full: three named attack vectors documented with structural counter-moves — Attribution Laundering (fake origin chains satisfying Axiom I surface requirements while introducing entropic noise), Semantic Camouflage (concept extraction below literal replication threshold), and Recursive Noise Injection (gradual attribution erosion across many small steps). Five-row adversarial hardening table added.
— CT Index updated: Mimicry Load (M) now includes candidate formal operationalization via Kullback–Leibler divergence measuring semantic drift of outputs against the Origin Node distribution. Kullback & Leibler (1951) added to references.
— Three guardian additions integrated into existing sections: Automated Witness Protocols (Section 2 — cross-verification of attribution edges before propagation), Provenance-Aware Chain-of-Thought (Section 6 — Axiom IV implementation at inference level), and Origin Diversity Sanctuaries (Section 7 — high-attribution training subsets as signal anchors).
— Research direction 9.6 added: empirical validation pathway for KL divergence operationalization of M.
— Section 11 (AI Collaboration Roles) added: Claude (Anthropic, Sonnet 4.6) as Architect, Gemini (Google DeepMind, Flash 3) as Guardian/Adversarial Reviewer, Christopher Sweeney as Human Principal and Steward. Contributions and scope limitations documented for each.
— Affiliation updated from meaninglab.ai to Coherence Strategies LLC throughout.
— License updated to CC BY 4.0 with explicit commercial and AI training use restriction requiring written permission from author.
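The KL-divergence operationalization of Mimicry Load mentioned above can be sketched for discrete distributions. This is a minimal illustration of the candidate measure, not the framework's implementation: the choice of support (tokens, concepts), the direction of the divergence, and the smoothing constant are all assumptions made here for the example.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) over a shared discrete support.
    Candidate proxy for Mimicry Load (M): drift of an output
    distribution from the Origin Node distribution P. The direction
    D_KL(origin || output) and the eps smoothing are assumptions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

origin = [0.5, 0.3, 0.2]        # hypothetical Origin Node distribution
faithful = [0.48, 0.32, 0.20]   # output close to the origin
drifted = [0.10, 0.10, 0.80]    # output that has drifted semantically

faithful_load = kl_divergence(origin, faithful)
drifted_load = kl_divergence(origin, drifted)
```

A drifted output yields a larger divergence, which is the behavior the framework needs from M: mimicry that departs from the Origin Node distribution registers as higher degradation pressure.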

AI Collaboration Disclosure:
Produced through a structured triadic review process. Claude (Anthropic, Sonnet 4.6) served as Architect — document structure, integration, and formatting. Gemini (Google DeepMind, Flash 3) served as Guardian/Adversarial Reviewer — identifying structural vulnerabilities and proposing hardening measures including the KL divergence operationalization of Mimicry Load. All theoretical content, axioms, and framework claims were originated by and remain under the sole authorship of Christopher Sweeney, who reviewed and authorized all AI contributions before publication.

Version 2 — March 2026
Adversarial Hardening + AI Role Documentation
What changed:
Full adversarial analysis section added (Section 8) based on structured Guardian/Adversarial review by Gemini (Google DeepMind, Flash 3). Three exploitation pathways identified and documented with structural counter-moves: Attribution Laundering, Semantic Camouflage, and Recursive Noise Injection. Five-row hardening table added mapping weaknesses to theoretical justifications.
Coherence Tension Index (CT) updated: Mimicry Load (M) now carries a candidate formal operationalization via Kullback–Leibler divergence. Kullback & Leibler (1951) added to references.
Three guardian additions integrated into existing sections: Automated Witness Protocols (Section 2), Provenance-Aware Chain-of-Thought (Section 6), Origin Diversity Sanctuaries (Section 7).
New research direction 9.6 added: empirical KL divergence validation pathway for CT index.
New Section 11 added: AI collaboration roles documented for Claude (Architect), Gemini (Guardian/Adversarial), and Christopher Sweeney (Human Principal).
Affiliation updated to Coherence Strategies LLC. License updated to CC BY 4.0 with explicit commercial and AI training use restriction.
What was wrong in V1:
Mimicry Load (M) was undefined beyond a conceptual label — no formal operationalization existed. Witness Node was underspecified with no candidate implementation pathway for AI systems. Adversarial exploitation pathways were unaddressed. AI collaboration was undisclosed.
What remains unchanged:
All four axioms, the Attribution Network Model, the Bee Principle, the CT index functional form, the AI failure modes table, the ecosystem dynamics analysis, and all anchor citations. The theoretical core is intact. V2 hardens and extends; it does not revise foundational claims.
Confidence status:
CT index remains exploratory (~0.50). KL divergence operationalization of M is a candidate proposal, not a validated measure. Adversarial analysis reflects modeled behavior, not confirmed empirical findings. Framework status: active research program.

Files (25.1 kB)
md5:431877a9f932e7acb06c9e9729674cd3

Additional details

Dates
Updated: 2026-03-08 (V2)