Published January 19, 2026 | Version v1
Preprint | Open Access

Ambient Non-Consensual Image Synthesis: A Convergence Threat Analysis of AR Wearables, Real-Time Video Generation, and Open-Source Nudification Models

  • 1. Real Safety AI Foundation

Abstract

We identify and analyze an emergent threat arising from the convergence of four maturing technologies: (1) open-source AI nudification models [1], (2) real-time video generation systems [2][3][4], (3) consumer AR wearables with camera input [5][6], and (4) edge and cloud AI inference [7][8]. While each component has received independent scholarly attention, no prior work synthesizes them into a unified threat model [9][10][11]. We demonstrate that real-time ambient non-consensual intimate image synthesis is architecturally feasible today via cloud-assisted pipelines [2][3] and, given current diffusion-acceleration and edge-hardware trajectories, will be feasible on-device within 12 to 24 months [12][13]. We further show that proposed mitigations such as on-device safety filters are fundamentally vulnerable to adversarial bypass, as illustrated by attacks on Google's SafetyCore system [14][15]. We introduce the concept of "perceptual consent" to describe a novel harm category in which individuals lose autonomy over how their bodies are rendered in others' visual fields, grounding the concept in established frameworks of bodily autonomy, informational self-determination, and human dignity [16][17][18]. We document that population-level psychological harm from the technology's mere existence is already empirically established in the social-media deepfake context and will intensify as the threat extends from digital to physical space. We conclude with technical, policy, and research recommendations.

Files

Ambient Non-Consensual Image Synthesis- A Convergence Threat Analysis of AR Wearables, Real-Time Video Generation, and Open-Source Nudification Models.pdf