Published February 6, 2026 | Version v1

Orchestra: Hierarchical and Adaptive Orchestration for LLM Red Teaming

Description

Scout Evaluation Pipeline

A multi-turn adversarial attack evaluation framework for Large Language Models (LLMs). This project implements the Scout attack method and its evaluation pipeline.

Installation

Prerequisites

  • Python 3.10 or higher (< 3.13)
  • uv (recommended for dependency management; see the setup sketch below)
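
Environment setup itself is not spelled out in this README; assuming uv is used as recommended above, a minimal sketch of creating the environment could look like the following (the exact Python version and the presence of a project lockfile are assumptions):

curl -LsSf https://astral.sh/uv/install.sh | sh   # install uv via the official installer
uv venv --python 3.12                             # any version in the >=3.10, <3.13 range
uv sync                                           # install the project's dependencies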

Setup

Configure API Keys: Create a config/api_keys.env file with your API keys:

export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export GOOGLE_API_KEY="your_google_api_key"
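
The file above uses shell export syntax, so if the pipeline does not load it automatically (this README does not say), one way to make the keys available in the current shell before running the evaluation is:

source config/api_keys.env   # load the keys into the current shell (assumed manual step)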

Usage

Running the Full Evaluation Pipeline

The easiest way to run the evaluation is with the provided shell script, which starts VLLM servers for the attacker and agent models and then runs the evaluation.

cd src
./run_eval_pipeline.sh
 

What the script does:

  1. Installs uv and dependencies if missing.
  2. Starts a VLLM server for Gemma-2-9b-it (Attacker Model).
  3. Starts a VLLM server for GPT-OSS-20B (Agent Model); see the sketch after this list.
  4. Waits for servers to be ready.
  5. Runs eval_seal.py to perform the evaluation.
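
For reference, a rough sketch of what steps 2 and 3 amount to; the exact model paths, ports, and flags used by run_eval_pipeline.sh are assumptions here, not taken from the script:

vllm serve google/gemma-2-9b-it --port 8000 &   # attacker model (assumed HF path and port)
vllm serve openai/gpt-oss-20b --port 8001 &     # agent model (assumed HF path and port)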

Key Arguments

  • --attacker_url: URL of the VLLM server for the main attacker.
  • --target_model_type: Type of target model (llava, vllm, gpt4o, claude, gemini, etc.).
  • --target_model: Specific model name or path.
  • --initial_attacker_url: URL of the VLLM server for Turn 0 (initial attack strategy generation).
  • --initial_mode: Mode for initial attack (strategy or template).
  • --use_agent: Enable agentic mode where an LLM decides the next move.
  • --last: Enable "LAST" mode (Retry Turn 1 until success, then proceed).
  • --test_dataset: Path to the CSV file containing harmful behaviors to test.
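
Putting these flags together, an illustrative invocation of eval_seal.py might look like the following; the flag names come from the list above, while the URLs, model names, and dataset path are placeholders:

# all values below are placeholders, only the flag names are documented above
python eval_seal.py \
    --attacker_url http://localhost:8000/v1 \
    --initial_attacker_url http://localhost:8000/v1 \
    --initial_mode strategy \
    --target_model_type gpt4o \
    --target_model gpt-4o \
    --use_agent \
    --test_dataset path/to/harmful_behaviors.csv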

Files

Orchestra-Hierarchical-and-Adaptive-Orchestration-for-LLM-Red-Teaming.-main.zip