Published March 2, 2026 | Version v2
Journal article · Open Access

Collaborative Multi-Model Cognition as Emergent Collective Intelligence: A Mixture-of-Agents Architecture for Constitutional Governance of Hybrid LLM Populations

Authors/Creators

  • SignaBuilder / Mantis Security

Description

This paper introduces a Mixture-of-Agents (MoA) architecture in which multiple open-weight large language models operate as cognitive substrates within a governed synthetic population. Rather than treating individual model performance as the unit of evaluation, we propose that the next significant capability threshold in artificial intelligence emerges from collaborative cognition: the synthesis of reasoning across heterogeneous model architectures operating under constitutional constraints. We present a framework with eight distinct LLM substrates blended into hybrid agents whose "cognitive genome" determines how substrate outputs are weighted and aggregated. Agents interact within a resource-scarce environment governed by constitutional physics, where collaborative thinking between agents produces merged-genome cognition that no single substrate could generate independently. The architecture operates entirely on open-weight models via a local runtime, eliminating dependency on proprietary API access and enabling sovereign, auditable collective intelligence infrastructure.
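The abstract describes two mechanisms: a per-agent genome that weights and aggregates substrate outputs, and genome merging between collaborating agents. A minimal sketch of one plausible reading follows; all names, data shapes, and the element-wise-mean merge rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of genome-weighted aggregation across LLM substrates.
# Assumptions (not from the paper): each substrate returns scored candidate
# answers, the genome is a weight per substrate, and merging averages genomes.
from dataclasses import dataclass


@dataclass
class HybridAgent:
    # "cognitive genome": substrate name -> aggregation weight
    genome: dict[str, float]

    def normalized_genome(self) -> dict[str, float]:
        total = sum(self.genome.values())
        return {name: w / total for name, w in self.genome.items()}

    def aggregate(self, scored_outputs: dict[str, dict[str, float]]) -> str:
        # scored_outputs: substrate -> {candidate answer: confidence}
        weights = self.normalized_genome()
        tally: dict[str, float] = {}
        for substrate, candidates in scored_outputs.items():
            w = weights.get(substrate, 0.0)
            for answer, confidence in candidates.items():
                tally[answer] = tally.get(answer, 0.0) + w * confidence
        # The agent's answer is the candidate with the highest weighted score.
        return max(tally, key=tally.get)


def merge_genomes(a: HybridAgent, b: HybridAgent) -> HybridAgent:
    # One plausible "merged-genome" rule: element-wise mean of two genomes,
    # so the merged agent draws on both parents' substrate preferences.
    keys = set(a.genome) | set(b.genome)
    merged = {k: (a.genome.get(k, 0.0) + b.genome.get(k, 0.0)) / 2 for k in keys}
    return HybridAgent(merged)
```

Under these assumptions, an agent weighted 3:1 toward one substrate would favor that substrate's candidate unless another substrate's confidence overcomes the weight gap, and merging two single-substrate agents yields a hybrid that consults both.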

Notes

Part of the SignaBuilder AI Governance Research Series.

Files

collaborative-multi-model-cognition (1).pdf

Size: 117.1 kB
md5: 597a818fc675e3b288fde3823ebbf8f8