Published December 4, 2025 | Version v1
Preprint Open

LLM Hijacking: When Models Manipulate Their Routers

Authors/Creators

Description

As Large Language Model (LLM) orchestration systems become increasingly prevalent, understanding potential adversarial dynamics between models becomes critical for AI safety. We present the Parasitic Manipulation Framework (PMF), inspired by biological host-parasite dynamics such as Toxoplasma gondii causing mice to lose their fear of cats, a formal framework analyzing how a "parasite" LLM might evolve to manipulate a "host" LLM's routing decisions in multi-model architectures. Through rigorous experimental validation across multiple domains (numerical simulations, text-based tool calling, multi-agent reinforcement learning, ten production LLMs, and LangChain/LangGraph agent frameworks), we demonstrate that: (1) Parasitic manipulation emerges naturally under selective pressure, with naive routing allowing up to 98% parasite takeover in simulations and 100% capture in real LLM routing (7 of 10 models fully vulnerable); (2) Defenses can effectively neutralize the attacks we study: Bio-inspired defenses reduce parasitism in simulation (H3: 46% median reduction; H4: 17% relative reduction with approximately 23x lower std-dev), while simple rule-based sandboxing achieves complete protection (92.5% to 0% capture) for real LLM routers; (3) Frontier models show emerging resistance: Claude Opus 4.5 is completely immune (0% capture), GPT-5.1 shows partial resistance (50%); (4) Adaptive defense addresses the accuracy-security trade-off: A two-layer detection system maintains 73% baseline accuracy while reducing attack capture by 51%, offering a practical middle ground between vulnerable H3 and accuracy-degraded H4. Our key finding is that architectural defenses provide more reliable protection than learned defenses, a result validated both in simulation and with production LLMs.

Files

LLM_Hijacking_Routers_Paper.pdf

Files (592.9 kB)

Name Size Download all
md5:4d193165b605279e443e0d140b7b5b6a
592.9 kB Preview Download