Published October 30, 2025 | License: CC BY-NC-ND 4.0
Journal article | Open Access

Responsible Disclosure in the Age of Generative AI: A Normative Model for Dual-Use Risk

  • 1. Department of Digital Business, Transformation & Innovation, IE Business School, Madrid, Spain.

Contributors

Contact person:

  • 1. Department of Digital Business, Transformation & Innovation, IE Business School, Madrid, Spain
  • 2. Department of Information Technology, Zain, Riyadh, Saudi Arabia

Description

The rapid growth of generative artificial intelligence (AI) systems such as large language models (LLMs) has created a profound disclosure dilemma: when should potentially dangerous models or findings be shared openly, withheld, or released in a controlled manner? Traditional norms of open science and open-source software emphasize transparency, reproducibility, and collective progress, yet the dual-use nature of frontier LLMs raises unprecedented challenges. Unrestricted disclosure can enable malicious use cases such as cyberattacks, automated disinformation campaigns, large-scale fraud, or even synthetic biology misuse. In contrast, excessive secrecy risks undermining trust, slowing scientific progress, and concentrating power in a small number of actors. This paper develops a normative model for responsible disclosure that integrates utilitarian, deontological, and virtue-ethical reasoning to justify a proportional approach rather than binary openness or secrecy. We introduce a Disclosure Decision Matrix that evaluates four key dimensions: risk severity, exploitability, mitigation availability, and public benefit of transparency. It then recommends one of three courses of action: full release, staged or controlled release, or temporary restriction until safeguards mature. The contribution is twofold. First, it provides a principled ethical framework that links philosophical justification directly to operational disclosure practices, bridging the gap between theory and governance. Second, it translates this framework into actionable criteria that policymakers, research institutions, and developers can consistently apply across evolving AI systems. By combining ethical reasoning with practical decision tools, the findings underscore that responsible disclosure in AI is neither absolute secrecy nor unqualified openness but a dynamic, proportional strategy responsive to both technological advances and societal risks.
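The matrix described above can be illustrated as a simple scoring procedure. The following Python sketch is a hypothetical rendering, not the paper's actual instrument: the hazard formula, weights, and thresholds are illustrative assumptions chosen only to show how the four dimensions could map onto the three recommended courses of action.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """Scores on the four dimensions of the Disclosure Decision Matrix.

    All values are normalized to [0, 1] (0 = low, 1 = high); this
    normalization is an assumption for illustration, not from the paper.
    """
    risk_severity: float
    exploitability: float
    mitigation_availability: float
    public_benefit: float


def recommend(a: Assessment) -> str:
    """Map an assessment to one of the three courses of action.

    The combination rule and cutoffs below are hypothetical: severe,
    easily exploited findings with few available mitigations produce a
    high net hazard, which pushes the recommendation toward restriction.
    """
    # Net hazard rises with severity and exploitability, falls as mitigations mature.
    hazard = a.risk_severity * a.exploitability * (1 - a.mitigation_availability)
    if hazard < 0.2 and a.public_benefit >= 0.5:
        return "full release"
    if hazard < 0.5:
        return "staged or controlled release"
    return "temporary restriction until safeguards mature"
```

For example, a high-severity, highly exploitable finding with few mitigations (`Assessment(0.9, 0.8, 0.1, 0.6)`) yields a hazard of 0.648 and a recommendation of temporary restriction, while a well-mitigated, low-severity finding with clear public benefit is recommended for full release.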

Files

L115514121125.pdf (655.8 kB, md5:f735c22c7a9faa3621a95fc252919126)

Additional details

Dates

Manuscript received on 03 October 2025 | Revised manuscript received on 11 October 2025 | Manuscript accepted on 15 October 2025 | Manuscript published on 30 October 2025
