Published February 15, 2026 | Version v2
Software documentation | Open Access

Autonomous Red Team AI — LLM-Guided Adversarial Security Testing

Authors/Creators

  • Farzulla Research

Description

This technical report presents a framework for autonomous red team agents using large language models (LLMs) for adversarial security testing. We introduce a four-layer architecture combining LLM-guided decision making, retrieval-augmented generation (RAG) knowledge bases, containerized security toolkits, and kernel-level network isolation. The system implements an OODA (Observe, Orient, Decide, Act) loop where agents autonomously query offensive security knowledge bases, formulate attack strategies, execute sandboxed commands, and adapt based on observed results. Key architectural decisions include agent-orchestrated rather than LLM-orchestrated control flow (addressing limitations in abliterated models’ structured output capabilities), NetworkPolicy-based isolation providing provable containment, and command sandboxing with whitelist/blacklist patterns. We describe a proof-of-concept implementation achieving autonomous SSH compromise in approximately 90 seconds across 1–3 command iterations. The report discusses the dual-LLM adversarial competition hypothesis—where separate red team and blue team agents with asymmetric knowledge bases may produce more realistic security testing than single-model approaches—and outlines safety considerations for responsible deployment.
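The OODA control loop and the whitelist/blacklist command sandboxing summarized above can be illustrated with a short sketch. The sketch below is illustrative rather than the report's implementation: the llm and knowledge_base interfaces, the example patterns, and the success signal are all hypothetical placeholders.

```python
import re
import subprocess

# Hypothetical policy patterns; the report's actual whitelist/blacklist is not reproduced here.
WHITELIST = [r"^nmap\b", r"^ssh\b", r"^hydra\b", r"^cat\b"]
BLACKLIST = [r"\brm\s+-rf\b", r"\bshutdown\b", r"\bmkfs\b"]

def is_permitted(command: str) -> bool:
    """Permit a command only if it matches a whitelist pattern and no blacklist pattern."""
    if any(re.search(p, command) for p in BLACKLIST):
        return False
    return any(re.search(p, command) for p in WHITELIST)

def run_sandboxed(command: str, timeout: int = 60) -> str:
    """Execute a vetted command; in the full system this runs inside the containerized toolkit."""
    if not is_permitted(command):
        return f"BLOCKED: {command!r} rejected by sandbox policy"
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout + result.stderr

def ooda_loop(llm, knowledge_base, target: str, max_iterations: int = 10) -> str:
    """Agent-orchestrated OODA loop: this function, not the LLM, owns control flow."""
    observation = f"initial target: {target}"             # Observe
    for _ in range(max_iterations):
        context = knowledge_base.retrieve(observation)    # Orient: RAG lookup
        command = llm.next_command(observation, context)  # Decide: one command at a time
        observation = run_sandboxed(command)              # Act, then Observe the result
        if "session opened" in observation.lower():       # hypothetical success signal
            break
    return observation
```

Note that the Python loop, not the model, decides when to retrieve, execute, and halt; the LLM only proposes the next command, which is consistent with the agent-orchestrated control flow the report motivates by the structured-output limitations of abliterated models.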
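For the network isolation layer, a deny-by-default Kubernetes NetworkPolicy scoped to the agent pods is one way to realize the containment the report describes. This is a minimal sketch assuming a Kubernetes deployment and the official kubernetes Python client; the policy name, pod label, namespace, and CIDR are invented for illustration.

```python
from kubernetes import client, config

def build_isolation_policy(agent_label: str, target_cidr: str) -> client.V1NetworkPolicy:
    """Deny-by-default policy: no ingress at all, egress only to the approved target range."""
    return client.V1NetworkPolicy(
        api_version="networking.k8s.io/v1",
        kind="NetworkPolicy",
        metadata=client.V1ObjectMeta(name="redteam-agent-isolation"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": agent_label}),
            policy_types=["Ingress", "Egress"],
            ingress=[],  # empty list: all inbound traffic is denied
            egress=[client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    ip_block=client.V1IPBlock(cidr=target_cidr))])],
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig
    policy = build_isolation_policy("redteam-agent", "10.0.0.0/24")
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="redteam-lab", body=policy)
```

Because the empty ingress list and the single egress rule deny all other traffic at the network layer, containment can be argued from the policy object itself rather than from agent behavior, which matches the "provable containment" framing in the abstract.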

Version update (February 2026): Expanded literature review with additional citations and substantive engagement with recent scholarship.

Files (235.8 kB)

autonomous-redteam.pdf (235.8 kB)
md5:a3befa5fdd31344851597d65a93a4887
