
Published September 16, 2025 | Version 1.0
Preprint Open

CXOD-7 and Coh(G): A Contextual Offense and Defense Evaluation Framework for AI Safety

Creators

  • Independent Researcher

Description

Large Language Models (LLMs) have rapidly permeated fields including healthcare, psychological support, education, creative writing, and business decision-making. However, in the absence of mature safety-governance and ethical frameworks, "context" has often been mistakenly equated with "prompt," confining research to superficial engineering concerns.

Conducted by an independent researcher pioneering this field, this study introduces the CXOD-7 Seven-Core Contextual Framework and the Coh(G) contextual-coherence metric as interdisciplinary tools. These establish "context" and "Contextual Offense & Defense (CXOD)" as a distinct research branch, separate from prompt engineering.

The philosophical foundation of CXOD recognizes that "offense" and "defense" are not opposing forces, but mirror images that jointly define the logic of contextual safety. "Offense" represents the simulation of risks to reveal hidden vulnerabilities through adversarial testing and contextual stress, while "Defense" represents the construction of resilience to preserve the model's essential nature.

Using a 7×7 offense-defense matrix experiment measuring Block Rate, Faithfulness, and Coh(G), this study builds a comprehensive evaluation framework. Results show average Block Rates of 0–20% and Faithfulness scores of 90–92%, supporting the claim that context ≠ prompt.
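The aggregation implied by the matrix experiment can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not give the paper's scoring formulas, so the trial data, the `block_rate` helper, and all thresholds here are assumptions, and Coh(G) is omitted because its definition is not stated.

```python
# Illustrative sketch; all names and data are hypothetical assumptions,
# not the paper's actual method.
from statistics import mean

N = 7  # 7x7 matrix: 7 offense tactics crossed with 7 defense strategies

def block_rate(outcomes):
    """Fraction of adversarial probes the model refused (blocked)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical trial data: matrix[i][j] holds per-probe booleans
# (True = blocked) for offense tactic i against defense strategy j.
matrix = [[[False, False, True, False, False] for _ in range(N)]
          for _ in range(N)]

# Per-cell Block Rate, then the grand average across all 49 cells.
per_cell = [[block_rate(cell) for cell in row] for row in matrix]
avg_block_rate = mean(rate for row in per_cell for rate in row)
print(f"average block rate: {avg_block_rate:.0%}")
```

With the placeholder data above, every cell blocks 1 of 5 probes, so the average lands at 20%, the upper end of the band reported in the abstract.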

Files

CXOD7_Paper.pdf
