Published March 11, 2025 | Version v5
Preprint Open

Documenting Post-Hoc XAI Systems: An Initial UML Approach for EU AI Act Compliance

Description

Artificial Intelligence (AI) has gained prominence in recent years and is now widely applied in both academic and industrial contexts. Its popularization has raised several challenges, particularly the need to make AI models auditable. Explainable Artificial Intelligence (XAI) seeks to address this issue through methods that interpret the decisions of black-box models. Despite this progress, few studies integrate XAI into the software engineering cycle. At the same time, the European Union's AI Act (Regulation 2024/1689) requires extensive documentation for high-risk systems, often resulting in hundreds of pages of reports. To bridge this gap, this work proposes An Initial UML Approach for EU AI Act Compliance, which unifies UML, XAI, and regulatory documentation practices. The approach introduces stereotypes, tagged values, and relationships for LIME, SHAP, ICE, and Ceteris Paribus-based explanations. By graphically representing critical XAI elements, it enhances traceability and auditability while providing partial coverage of the compliance requirements, serving as a structured complement to the mandatory textual documentation. The proposal is illustrated through a case study involving a breast cancer diagnosis system.
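To make the documented explanation types concrete, the following is a minimal, dependency-free sketch of one of them, an Individual Conditional Expectation (ICE) curve: a single feature is varied over a grid while the instance's other feature values are held fixed, and the model's prediction is recorded at each point. The `predict_proba` function, the feature names, and the patient instance are all hypothetical stand-ins, not taken from the repository's diagnosis system.

```python
import math

def predict_proba(features):
    # Toy stand-in for a trained classifier: a logistic model
    # over two (hypothetical) standardized features.
    z = 1.5 * features["mean_radius"] - 0.8 * features["mean_texture"]
    return 1.0 / (1.0 + math.exp(-z))

def ice_curve(instance, feature, grid):
    """Individual Conditional Expectation: vary one feature over a grid
    while keeping the instance's remaining feature values fixed."""
    curve = []
    for value in grid:
        probe = dict(instance)       # copy so the original instance is untouched
        probe[feature] = value       # perturb only the feature of interest
        curve.append((value, predict_proba(probe)))
    return curve

# One (hypothetical) patient instance and a grid of perturbed values.
patient = {"mean_radius": 1.2, "mean_texture": 0.4}
grid = [0.0, 0.5, 1.0, 1.5, 2.0]
curve = ice_curve(patient, "mean_radius", grid)
```

In the proposed UML approach, artifacts like this curve (the perturbed feature, the grid, and the resulting predictions) are the elements the ICE stereotype and its tagged values would capture in the diagram.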

Files (481.6 kB)

An Initial UML Approach for EU AI Act Compliance - v3.1.pdf

Additional details

Dates

Created
2025-08-13
First upload.
Updated
2025-09-19
Second upload. Improved the research.
Updated
2025-09-26
Third upload. Fixed some inconsistencies.
Updated
2025-10-25
Fourth upload. Improved the research.
Updated
2025-11-03
Fifth upload. Minor changes.

Software

Repository URL
https://github.com/miklotovx/diag_system_en
Programming language
Python
Development Status
Active
