AICL: A Control-Loop Architecture for Stable Long-Horizon LLM Agents
Abstract
Large language models (LLMs) exhibit impressive reasoning abilities but remain fragile when operating over long horizons: they drift, accumulate inconsistency, and produce outcomes that are difficult to monitor or reproduce. This work introduces the Artificial Intelligence Control Loop (AICL)—a general-purpose control-loop architecture designed to stabilize and regulate long-horizon LLM agent workflows.
AICL formalizes agentic reasoning as a closed-loop process consisting of:
(1) structured planning,
(2) probe-driven monitoring,
(3) event-based orchestration, and
(4) quantitative stability budgets that bound drift and behavior variance.
We present the theoretical motivation, architectural components, stability probes, and mechanisms for runtime regulation. We also release CyberLoop, an open-source reference implementation that demonstrates AICL’s practical applicability to multi-step investigations, iterative reasoning loops, and resource-bounded agent workflows.
Experiments and qualitative evaluations show that AICL improves reproducibility, reduces drift, and enables more stable long-horizon decisions across diverse LLM settings.
As AI systems become increasingly autonomous and persistent, AICL provides a foundation for building reliable, interpretable, and operationally scalable intelligent agents.
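As a minimal illustrative sketch of the four-stage loop described above — structured planning, probe-driven monitoring, event-based orchestration, and stability budgets — the control flow might look like the following. All names, types, and thresholds here are hypothetical and do not reflect CyberLoop's actual API.

```typescript
// Hypothetical sketch of the AICL closed loop. Names, probe formulas,
// and thresholds are illustrative assumptions, not CyberLoop's real API.

type Plan = { steps: string[] };
type ProbeReading = { drift: number; variance: number };

// (4) quantitative stability budget: bounds on drift and variance
interface StabilityBudget {
  maxDrift: number;
  maxVariance: number;
}

function withinBudget(r: ProbeReading, b: StabilityBudget): boolean {
  return r.drift <= b.maxDrift && r.variance <= b.maxVariance;
}

// (1) structured planning: decompose a goal into explicit steps
function plan(goal: string): Plan {
  return { steps: [`analyze: ${goal}`, `act: ${goal}`, `verify: ${goal}`] };
}

// (2) probe-driven monitoring: measure drift/variance after each step.
// Toy stand-in: drift grows with the length of the execution history.
function probe(step: string, history: string[]): ProbeReading {
  return { drift: history.length * 0.1, variance: 0.05 };
}

// (3) event-based orchestration: execute steps, and emit a corrective
// "replan" event whenever a probe reading exceeds the budget.
function runLoop(goal: string, budget: StabilityBudget): string[] {
  const history: string[] = [];
  const { steps } = plan(goal);
  for (const step of steps) {
    const reading = probe(step, history);
    if (!withinBudget(reading, budget)) {
      history.push("replan"); // budget violated: re-enter planning
      break;
    }
    history.push(step);
  }
  return history;
}
```

Under a loose budget the loop completes all planned steps; under a tight one (e.g. `maxDrift: 0.05`), the second probe reading exceeds the drift bound and the loop falls back to replanning — the regulation behavior the abstract describes.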
Files
| Name | Size | MD5 |
|---|---|---|
| AICL.pdf | 356.9 kB | 64b08445c20170582c9d5d48c6c88c96 |
Additional details
Related works
- Is supplemented by
- Software: 10.5281/zenodo.17835643 (DOI)
Software
- Repository URL
- https://github.com/roackb2/cyberloop
- Programming language
- TypeScript
- Development Status
- Active