Published May 27, 2025 | Version: initial

Software | Open

USENIX Security '25 Cycle2-592-Attention-Exploit-Artifact-Evaluation

  • 1. University of Arizona
  • 2. Purdue University System
  • 3. Purdue University West Lafayette
  • 4. Qualcomm Technologies Inc.

Description

Intro

This repo contains the official implementation for paper #592, accepted at USENIX Security 2025 Cycle 2: "From Threat to Trust: Exploiting Attention Mechanisms for Attacks and Defenses in Cooperative Perception", including the attack SOMBRA and the defense LUCIA.

Due to the file count limit, the source code is zipped in SOMBRA_LUCIA.zip.

Dataset and Model Download

Please visit the official website of OPV2V for the latest dataset download instructions. Our evaluation is conducted on the test split of the dataset.

Pretrained model weights can be downloaded from the OpenCOOD repo. We included Attentive Fusion, CoAlign, Where2comm, and V2VAM in our evaluation.

Environment Setup

We use pixi for easier and faster environment setup. More information can be found in the pixi documentation.

We tested on CUDA 11.8 / 12.0. Please edit the pixi.toml file to change the pytorch-cuda version and the spconv-cu118 version accordingly (e.g., spconv-cu120 for CUDA 12.0).
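
For reference, the relevant entries might look like the following. This is only an illustrative sketch; the table names and version pins in the shipped pixi.toml may differ:

[dependencies]
pytorch-cuda = "11.8.*"   # match your local CUDA toolkit

[pypi-dependencies]
spconv-cu118 = "*"        # swap for spconv-cu120 on CUDA 12.0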

Next, with pixi installed, simply run the following command to install the packages (if not yet installed) into the virtual environment and activate it (to deactivate, simply run exit).

pixi shell

Finally, set up and build the dependencies for OpenCOOD, the NMS GPU version, and CoMamba (optional) using the following commands:

pixi run opencood_setup
pixi run nms_gpu_build

Evaluation

For evaluation, use python cp_attack.py with the corresponding arguments. Use --help to show all available arguments. Use --loss sombra for our attack SOMBRA, or --loss pa for the attack using the loss from prior art. Specify --defense for LUCIA and --robosac for the ROBOSAC defense; an invocation combining these flags is sketched below.
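
For instance, a run with the prior-art loss evaluated against ROBOSAC might look like this (illustrative only; substitute your own paths and check --help for exact flag usage):

python cp_attack.py --model_dir <path_to_model> --model AttentiveFusion --data_dir <path_to_opv2v_test> --attack_mode mor --loss pa --robosac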

For targeted object removal, you can specify the target object using --target_id followed by the corresponding object ID in the dataset, or in/out for a randomly sampled target within/beyond the victim's line of sight, or random for a randomly sampled target (a targeted sketch follows the example below).

Example:

python cp_attack.py --model_dir <path_to_model, e.g. attfusion> --model AttentiveFusion --data_dir <path_to_opv2v_test> --attack_mode mor --loss sombra (--defense)
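
Similarly, a targeted removal run might look like this (a sketch only; <targeted_mode> is a placeholder, see --help for the actual --attack_mode value):

python cp_attack.py --model_dir <path_to_model> --model AttentiveFusion --data_dir <path_to_opv2v_test> --attack_mode <targeted_mode> --loss sombra --target_id in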

The detailed attack results will be saved under the same folder as the model weights.

Case Study

For the traffic jam case study, the dataset is zipped in traffic_jam_data.zip.

The evaluation is done in two parts to save time on perturbation generation. 

First, run 

python cp_attack.py --model_dir <path_to_model, e.g. attfusion> --model <Model_Name> --data_dir <Path_to_Traffic_Jam> --attack_mode mor --loss sombra --save_perturb

to save the perturbations generated with knowledge of only the attacker's and the victim's features.

Next, rename the folder that stores the perturbed attacker features to `adv_feature`.
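
For example (a sketch; the name of the perturbation output folder depends on the run, so <perturb_output_dir> is a placeholder):

mv <perturb_output_dir> adv_feature

Then run the following: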

python case_study.py --model_dir <path_to_model, e.g. attfusion> --data_dir <path_to_traffic_jam> 

Files (197.4 MB)

SOMBRA_LUCIA.zip

md5:433cf9a841413578bea90fc47c1a41d1 (339.2 kB)
md5:079380e46586d638ebd0bf71e9a7cb1b (197.1 MB)
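
To verify a download against the checksums above (md5sum from GNU coreutils; substitute the file you downloaded):

md5sum SOMBRA_LUCIA.zip   # compare the printed hash with the list above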

Additional details

Dates

Available: 2025-05-27