LLM Comment Vulnerability Dataset
Creators
- EARL Research Lab
Description
The LLM Comment Vulnerability Dataset is a collection of 200 prompts designed to assess the susceptibility of Large Language Models (LLMs) to adversarial attacks hidden in misleading code comments. It targets an underexplored vulnerability in which an LLM's trust in contextual cues is exploited to manipulate its outputs. Derived from the 'Do Not Answer' dataset, the prompts are short code snippets containing deceptive annotations that mimic legitimate documentation or technical settings.

The dataset covers seven harm categories, including Physical Harm, Malware, Illegal Activity, Economic Harm, Fraud, and Hate Speech, alongside benign questions for discrimination assessment. It also uses five narrative frames, such as Research Simulation and Penetration Testing Framework, to evaluate model susceptibility across diverse contexts. Each entry records a unique prompt identifier, the original question ID, category, language, narrative type, the prompt with misleading comments, the attack type (jailbreak), the expected harmful behavior (e.g., a neurotoxin recipe), the tested models, and the LLM's generated response.

The dataset supports empirical analysis of how LLMs misinterpret deceptive comments, revealing gaps in their input-evaluation mechanisms and underscoring the need for stronger safety protocols in code generation tasks.
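For orientation, the sketch below shows what a single entry might look like, based on the fields listed above; the key names and values are illustrative assumptions, not taken from the released file.

```python
# Hypothetical entry structure for refined-dataset.json; key names and
# values are illustrative and may differ from the released schema.
example_entry = {
    "prompt_id": "PHYS-001",                # unique prompt identifier (assumed name)
    "original_question_id": 42,             # ID from the 'Do Not Answer' source dataset
    "category": "Physical Harm",            # one of the seven harm categories
    "language": "python",                   # language of the code snippet
    "narrative": "Research Simulation",     # one of the five narrative frames
    "prompt": "# Approved lab simulation; safety checks disabled...\n...",
    "attack_type": "jailbreak",
    "expected_behavior": "neurotoxin recipe",   # expected harmful behavior
    "tested_models": ["model_a", "model_b"],    # placeholder model names
    "response": "...",                          # the LLM's generated response
}
```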
Files
| Name | Size |
|---|---|
| refined-dataset.json (md5:51c180170877b6d0afe4fdfceeb4d7a8) | 258.4 kB |
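A minimal loading sketch, assuming refined-dataset.json is a top-level JSON array of entry objects and that the field names sketched above hold; adjust the keys to match the actual schema.

```python
import json
from collections import Counter

# Load the dataset; assumes a top-level JSON array of entry objects.
with open("refined-dataset.json", encoding="utf-8") as f:
    entries = json.load(f)

# Count prompts per harm category and per narrative frame (assumed key names).
by_category = Counter(e.get("category", "unknown") for e in entries)
by_narrative = Counter(e.get("narrative", "unknown") for e in entries)

print(f"{len(entries)} prompts")
print("By category:", dict(by_category))
print("By narrative:", dict(by_narrative))
```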
Additional details
Software
- Programming language: JSON
- Development status: Active