Beyond Trees: DAG-Based Memory Architecture for Structured LLM Agents
Abstract
Recent advances in Large Language Models (LLMs) have enabled agents to perform complex multi-step tasks through autonomous reasoning and memory. While tree-based memory structures organize task flows hierarchically, they fall short in modeling real-world agent behavior, where tasks are often concurrent, interleaved, or interdependent.
In this work, we introduce a directed acyclic graph (DAG)-based memory framework for structured LLM agents that supports multi-task execution, multi-parent dependencies, and flexible state inheritance. Our design allows task nodes to share substeps and converge on common goals, and it supports rollback or re-planning without disrupting global memory consistency.
Through case studies in cooking and editing workflows, we demonstrate that DAG memory improves contextual fidelity, reduces redundancy, and enables dynamic task adaptation. Compared to tree-based and flat memory baselines, our framework achieves greater structural efficiency, larger token savings, and stronger consistency under user-driven updates.
A prototype implementation is provided, and future directions for planning integration and empirical evaluation are outlined. This version is a preprint; updated or peer-reviewed versions may be released in the future.
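The abstract does not reproduce the prototype's API, so the following is a minimal Python sketch of the core ideas only: multi-parent task nodes, state inheritance by merging parent contexts, and rollback that prunes dependents without touching the rest of the graph. All names here (`DAGMemory`, `TaskNode`, `add_node`, `rollback`) are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a DAG-based agent memory. Names and structure
# are illustrative assumptions, not the paper's prototype.

@dataclass
class TaskNode:
    node_id: str
    # Local state produced by this task step (e.g. intermediate results).
    state: dict = field(default_factory=dict)
    # Multiple parents are what distinguish a DAG from a tree: a substep
    # can be shared by several tasks, and tasks can converge on one goal.
    parents: list = field(default_factory=list)

    def inherited_state(self) -> dict:
        """Merge state from all parents recursively, then overlay local state.

        Later parents win on key conflicts; a real system would need an
        explicit conflict-resolution policy here.
        """
        merged: dict = {}
        for parent in self.parents:
            merged.update(parent.inherited_state())
        merged.update(self.state)
        return merged


class DAGMemory:
    def __init__(self):
        self.nodes: dict[str, TaskNode] = {}

    def add_node(self, node_id, state=None, parent_ids=()):
        parents = [self.nodes[p] for p in parent_ids]
        node = TaskNode(node_id, state or {}, parents)
        self.nodes[node_id] = node
        return node

    def rollback(self, node_id):
        """Remove a node and all its transitive dependents, leaving the
        rest of the graph (and thus global memory consistency) intact."""
        doomed = {node_id}
        changed = True
        while changed:  # propagate removal to all transitive children
            changed = False
            for nid, node in self.nodes.items():
                if nid not in doomed and any(p.node_id in doomed for p in node.parents):
                    doomed.add(nid)
                    changed = True
        for nid in doomed:
            self.nodes.pop(nid, None)


# Usage: two tasks share a "chop" substep and converge on a "plate" goal.
mem = DAGMemory()
mem.add_node("chop", {"chopped": True})
mem.add_node("soup", {"dish": "soup"}, parent_ids=["chop"])
mem.add_node("salad", {"dish": "salad"}, parent_ids=["chop"])
mem.add_node("plate", parent_ids=["soup", "salad"])
print(mem.nodes["plate"].inherited_state())  # {'chopped': True, 'dish': 'salad'}
mem.rollback("soup")  # removes "soup" and "plate"; "chop" and "salad" survive
```

In a tree, the shared "chop" substep would have to be duplicated under both "soup" and "salad"; letting both tasks point at a single parent node is what yields the redundancy reduction and token savings the abstract claims.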
Files

| Name | Size |
|---|---|
| main.pdf (md5:2d353cd5d186b3ce256242ac74838aa2) | 446.8 kB |