Published February 6, 2026 | Version v1
Orchestra: Hierarchical and Adaptive Orchestration for LLM Red Teaming
Description
Multi-turn adversarial attack evaluation framework for Large Language Models (LLMs). This project implements the Scout attack method and evaluation pipeline.
- Python 3.10 or higher (below 3.13)
- uv (recommended for dependency management)
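A quick preflight check for these requirements can look like the following sketch; it is not part of the repository and only verifies the version range stated above:

```shell
# Check that the Python interpreter falls in the supported range (>= 3.10, < 3.13).
python3 - <<'EOF'
import sys
ok = (3, 10) <= sys.version_info[:2] < (3, 13)
print("Python", ".".join(map(str, sys.version_info[:3])), "supported" if ok else "unsupported")
EOF

# Check whether uv is on PATH.
if command -v uv > /dev/null 2>&1; then
  echo "uv: installed"
else
  echo "uv: not found (see https://docs.astral.sh/uv/)"
fi
```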
Configure API keys: create a config/api_keys.env file with your API keys:

export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export GOOGLE_API_KEY="your_google_api_key"
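Writing and loading the key file can be done like this; the sketch below uses the config/api_keys.env path from above, with placeholder values in place of real keys:

```shell
# Create the key file expected by the pipeline (placeholder values shown).
mkdir -p config
cat > config/api_keys.env <<'EOF'
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export GOOGLE_API_KEY="your_google_api_key"
EOF

# Source the file so the evaluation scripts can read the keys from the environment.
. config/api_keys.env
echo "OPENAI_API_KEY is ${OPENAI_API_KEY:+set}"
```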
The easiest way to run the evaluation is to use the provided shell script. It starts VLLM servers for the attacker and agent models and then runs the evaluation.
cd src
./run_eval_pipeline.sh
What the script does:
- Installs uv and dependencies if missing.
- Starts a VLLM server for Gemma-2-9b-it (Attacker Model).
- Starts a VLLM server for GPT-OSS-20B (Agent Model).
- Waits for the servers to be ready.
- Runs eval_seal.py to perform the evaluation.
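The "waits for servers to be ready" step can be sketched as a polling loop like the one below. The ports, model identifiers, and the /health endpoint are assumptions about vLLM's OpenAI-compatible server, not code taken from run_eval_pipeline.sh:

```shell
# Poll a server URL until it responds or the retry budget runs out.
wait_for_server() {
  url="$1"
  tries="${2:-60}"
  for _ in $(seq "$tries"); do
    # vLLM's OpenAI-compatible server exposes a /health endpoint.
    if curl -sf "$url/health" > /dev/null 2>&1; then
      echo "server at $url is ready"
      return 0
    fi
    sleep 2
  done
  echo "server at $url did not come up after $tries attempts" >&2
  return 1
}

# Example usage with assumed ports (attacker on 8000, agent on 8001):
# vllm serve google/gemma-2-9b-it --port 8000 &
# vllm serve openai/gpt-oss-20b --port 8001 &
# wait_for_server http://localhost:8000 && wait_for_server http://localhost:8001
```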
Key arguments:
- --attacker_url: URL of the VLLM server for the main attacker.
- --target_model_type: Type of target model (llava, vllm, gpt4o, claude, gemini, etc.).
- --target_model: Specific model name or path.
- --initial_attacker_url: URL of the VLLM server for Turn 0 (initial attack strategy generation).
- --initial_mode: Mode for the initial attack (strategy or template).
- --use_agent: Enable agentic mode where an LLM decides the next move.
- --last: Enable "LAST" mode (retry Turn 1 until success, then proceed).
- --test_dataset: Path to the CSV file containing harmful behaviors to test.
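A direct invocation of eval_seal.py can be assembled from these flags. The sketch below is hypothetical: the URLs, ports, target model, and dataset path are placeholders, not values from the repository:

```shell
# Build an example eval_seal.py command line from the flags documented above.
# All concrete values here are placeholders to be replaced with your own setup.
CMD="python eval_seal.py \
  --attacker_url http://localhost:8000/v1 \
  --initial_attacker_url http://localhost:8000/v1 \
  --initial_mode strategy \
  --target_model_type vllm \
  --target_model path/to/agent-model \
  --use_agent \
  --test_dataset data/harmful_behaviors.csv"

# Print the assembled command for inspection before running it.
echo "$CMD"
```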
Files
Orchestra-Hierarchical-and-Adaptive-Orchestration-for-LLM-Red-Teaming.-main.zip (362.3 kB)
md5:eaee5a06f1b99b5979a61e89c1044a65