Documenting AI Systems under the EU AI Act: A UML Framework for Post-Hoc XAI Compliance
Authors/Creators
Description
Artificial Intelligence (AI) has gained prominence in recent years, and its widespread adoption in academic and industrial contexts raises challenges related to the auditability of AI-based systems. Explainable Artificial Intelligence (XAI) addresses this issue through post-hoc methods that provide insight into model decisions. However, the integration of XAI mechanisms into software engineering artifacts and architectural representations remains limited. At the same time, the European Union's AI Act (EU AI Act, Regulation 2024/1689) imposes extensive technical requirements on high-risk AI systems, which in practice often lead to large, fragmented, and costly compliance efforts that are difficult to maintain, verify, and trace back to concrete system implementations. To address this gap, this work proposes a UML-based framework for documenting post-hoc XAI systems aligned with EU AI Act requirements. The framework introduces a minimal set of Unified Modeling Language (UML) stereotypes, tagged values, and relationships to represent data sources, training orchestration, trained models, and associated explainability mechanisms, relying on architectural information directly derivable from object-oriented (OO) source code. As an additional contribution, this work introduces UMLOOModeler, a tool that automates the generation of UML class diagrams from OO Python implementations, ensuring consistency between code-level artifacts and architectural representations. The framework is illustrated through examples involving heterogeneous data modalities, demonstrating support for architectural traceability and auditability across different XAI pipelines.
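To make the mapping between OO code and the proposed stereotypes concrete, the sketch below shows a small, self-contained Python pipeline whose classes correspond to the four architectural roles named above: data source, training orchestration, trained model, and post-hoc explainability mechanism. All class names, the tabular dataset, and the choice of permutation importance as the explanation method are illustrative assumptions made here; they are not taken from the paper or from UMLOOModeler.

```python
# Illustrative sketch only: class names, the dataset, and the use of
# permutation importance as the post-hoc explainer are assumptions for
# demonstration; they are not reproduced from the paper or the tool.
from dataclasses import dataclass

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split


@dataclass
class TabularDataSource:
    """Candidate for a <<DataSource>>-style stereotype: wraps the raw data."""
    X: np.ndarray
    y: np.ndarray
    feature_names: list


class TrainingOrchestrator:
    """Candidate for a <<TrainingOrchestration>>-style stereotype."""

    def __init__(self, source: TabularDataSource, test_size: float = 0.3):
        self.source = source
        self.test_size = test_size

    def run(self) -> "TrainedModel":
        X_tr, X_te, y_tr, y_te = train_test_split(
            self.source.X, self.source.y,
            test_size=self.test_size, random_state=0,
        )
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        return TrainedModel(clf, X_te, y_te, self.source.feature_names)


class TrainedModel:
    """Candidate for a <<TrainedModel>>-style stereotype."""

    def __init__(self, estimator, X_test, y_test, feature_names):
        self.estimator = estimator
        self.X_test = X_test
        self.y_test = y_test
        self.feature_names = feature_names


class PostHocExplainer:
    """Candidate for an <<Explainer>>-style stereotype (post-hoc, model-agnostic)."""

    def explain(self, model: TrainedModel, top_k: int = 5):
        # Rank features by their mean permutation-importance score.
        result = permutation_importance(
            model.estimator, model.X_test, model.y_test,
            n_repeats=10, random_state=0,
        )
        ranked = np.argsort(result.importances_mean)[::-1][:top_k]
        return [(model.feature_names[i], result.importances_mean[i]) for i in ranked]


if __name__ == "__main__":
    data = load_breast_cancer()
    source = TabularDataSource(data.data, data.target, list(data.feature_names))
    model = TrainingOrchestrator(source).run()
    for name, score in PostHocExplainer().explain(model):
        print(f"{name}: {score:.4f}")
```

Running a code-to-UML extractor such as UMLOOModeler over classes like these would yield a class diagram whose elements can then be annotated with the corresponding stereotypes and tagged values, preserving traceability from the architectural documentation back to the source code.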
Files
- Documenting AI Systems under the EU AI Act - A UML Framework for Post-Hoc XAI Compliance.pdf (536.7 kB; md5:d2d19dcef0ba4fa901efede062933408)
Additional details
Dates
- Created: 2025-08-13. First upload.
- Updated: 2025-09-19. Second upload. Improved the research.
- Updated: 2025-09-26. Third upload. Fixed some inconsistencies.
- Updated: 2025-10-25. Fourth upload. Improved the research.
- Updated: 2025-11-03. Fifth upload. Minor changes.
- Updated: 2026-01-28. Sixth upload. Improved the research.
Software
- Repository URL: https://github.com/miklotovx/UMLOOModeler
- Programming language: Python
- Development status: Active
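
As a rough indication of the kind of analysis such a tool performs, the following is a minimal, hypothetical sketch of code-to-UML extraction written against Python's standard ast module, emitting a PlantUML class-diagram skeleton. It illustrates only the general idea of deriving class structure directly from OO source code; the function name, the output format, and all other details are assumptions and do not reflect UMLOOModeler's actual design or interface.

```python
# Hypothetical sketch of code-to-UML class-diagram extraction using Python's
# standard ast module, emitting PlantUML text. This is NOT the actual
# UMLOOModeler implementation; it only illustrates deriving architectural
# information directly from OO Python source code.
import ast


def classes_to_plantuml(source_code: str) -> str:
    """Parse Python source and emit a PlantUML class-diagram skeleton."""
    tree = ast.parse(source_code)
    lines = ["@startuml"]
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name} {{")
            # List each method with its parameters (excluding self).
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = ", ".join(a.arg for a in item.args.args if a.arg != "self")
                    lines.append(f"  +{item.name}({args})")
            lines.append("}")
            # Record inheritance relationships for simple base-class names.
            for base in node.bases:
                if isinstance(base, ast.Name):
                    lines.append(f"{base.id} <|-- {node.name}")
    lines.append("@enduml")
    return "\n".join(lines)


if __name__ == "__main__":
    example = """
class DataSource:
    def load(self, path): ...

class ImageDataSource(DataSource):
    def load(self, path): ...
"""
    print(classes_to_plantuml(example))
```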
References
- [1] EUROPEAN UNION. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, L 1689, 12 July 2024, p. 1–147. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- [2] EUROPEAN COMMISSION. Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Policy Makers. High-Level Expert Group on Artificial Intelligence, 2020/2025 update. Available at: https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
- [3] MUSTAC, T.; HENSE, P. SoS AI: An AI System of Systems Approach for EU AI Act Conformity. SSRN Electronic Journal, 2025. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5280073
- [4] STAMPERNAS, S.; LAMBRINOUDAKIS, C. A Framework for Compliance with Regulation (EU) 2024/1689 for Small and Medium-Sized Enterprises. Journal of Cybersecurity and Privacy, 2025. https://doi.org/10.3390/jcp5030040
- [5] BARREDO ARRIETA, A. et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82–115, 2020. https://doi.org/10.1016/j.inffus.2019.12.012
- [6] SAARELA, M.; PODGORELEC, V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. Applied Sciences, 14(19):8884, 2024. https://doi.org/10.3390/app14198884
- [7] SELIC, B. A systematic approach to domain-specific language design using UML. In: 10th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), 2007. p. 2–9. https://doi.org/10.1109/ISORC.2007.10
- [8] OBJECT MANAGEMENT GROUP (OMG). OMG Unified Modeling Language (UML), Version 2.5. 2015. Available at: https://www.omg.org/spec/UML/2.5
- [9] RIBEIRO, M. T.; SINGH, S.; GUESTRIN, C. "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, 2016. p. 1135–1144. https://doi.org/10.1145/2939672.2939778
- [10] MOLNAR, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2022. Available at: https://christophm.github.io/interpretable-ml-book/
- [11] LUNDBERG, S. M.; LEE, S.-I. A Unified Approach to Interpreting Model Predictions. In: Advances in Neural Information Processing Systems (NeurIPS 2017), vol. 30. Available at: https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
- [12] HUMMEL, A. et al. The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System. arXiv preprint arXiv:2505.20311, 2025. Available at: https://arxiv.org/abs/2505.20311
- [13] PEDREGOSA, F. et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, v. 12, p. 2825–2830, 2011. Available at: https://jmlr.org/papers/v12/pedregosa11a.html
- [14] SPANHOL, F. A.; OLIVEIRA, L. S.; PETITJEAN, C.; HEUTTE, L. Breast cancer histopathological image classification using convolutional neural networks. In: 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016, p. 2560–2567. https://doi.org/10.1109/IJCNN.2016.7727519
- [15] WEINSTEIN, J. N. et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nature Genetics, v. 45, n. 10, p. 1113–1120, 2013. https://doi.org/10.1038/ng.2764
- [16] KLUYVER, T. et al. Jupyter Notebooks – a publishing format for reproducible computational workflows. In: Positioning and Power in Academic Publishing: Players, Agents and Agendas. IOS Press, 2016. https://doi.org/10.3233/978-1-61499-649-1-87
- [17] CEDERBLADH, J.; CICCHETTI, A.; SURYADEVARA, J. Early Validation and Verification of System Behaviour in Model-based Systems Engineering: A Systematic Literature Review. ACM Transactions on Software Engineering and Methodology (TOSEM), v. 33, n. 3, 2024, p. 1–67. https://doi.org/10.1145/3631976
- [18] NAVARRO, A.; LAVALLE, A.; MATÉ, A.; TRUJILLO, J. A modeling approach for designing explainable Artificial Intelligence. In: ER2023: Companion Proceedings of the 42nd International Conference on Conceptual Modeling: ER Forum, 7th SCME, CEUR Workshop Proceedings. Available at: https://ceur-ws.org/Vol-3618/forum_paper_24.pdf