Published March 30, 2026 | Version v7
Preprint Open

Documenting AI Systems under the EU AI Act: A UML Architectural Framework with Support for Post-Hoc XAI

Description

Artificial Intelligence (AI) has gained prominence in recent years, and its widespread adoption raises challenges for the auditability of AI-based systems. Explainable Artificial Intelligence (XAI) addresses this issue through post-hoc methods that provide interpretations of model behavior. However, the integration of XAI methods into architectural representations and compliance-oriented documentation remains largely unstructured. At the same time, the European Union's AI Act (Regulation 2024/1689) imposes documentation requirements on high-risk AI systems without prescribing a standardized format. As a result, compliance material is often costly to produce and maintain, and may not accurately reflect the system implementation. To address this gap, this work proposes a UML architectural framework for AI systems incorporating post-hoc XAI, focusing on the structural representation of compliance-relevant items required by Annex IV. The framework defines a minimal set of Unified Modeling Language (UML) stereotypes, tagged values, and relationships, based on an architectural contract emerging from object-oriented (OO) Python implementations. As an additional contribution, this work introduces UMLOOModeler, a tool that generates UML class diagrams from these implementations using a conservative extraction strategy, ensuring consistency between the implementation and its architectural representation. The framework is illustrated through heterogeneous AI configurations and a partial example of technical documentation, supporting traceability, auditability, and documentation consistency.
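The abstract describes a "conservative extraction strategy" that derives UML class diagrams from OO Python code, recording only what is statically declared so the diagram never over-claims. As a rough, hypothetical sketch of that idea (not the actual UMLOOModeler implementation; all names here are illustrative), a static pass over the AST can collect class names, explicit base classes, and method signatures, then emit PlantUML:

```python
import ast
import textwrap

def extract_classes(source: str) -> dict:
    """Conservatively extract classes from Python source.

    Only statically declared classes, simple-name base classes, and
    'def' members are recorded; dynamic constructs (type(), metaclasses,
    monkey-patched attributes) are deliberately ignored, so the resulting
    diagram never claims more than the code guarantees.
    """
    classes = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            bases = [b.id for b in node.bases if isinstance(b, ast.Name)]
            methods = [m.name for m in node.body
                       if isinstance(m, (ast.FunctionDef, ast.AsyncFunctionDef))]
            classes[node.name] = {"bases": bases, "methods": methods}
    return classes

def to_plantuml(classes: dict) -> str:
    """Render the extracted structure as a PlantUML class diagram."""
    lines = ["@startuml"]
    for name, info in classes.items():
        lines.append(f"class {name} {{")
        lines.extend(f"  +{m}()" for m in info["methods"])
        lines.append("}")
        # Generalization arrows for statically visible inheritance only.
        lines.extend(f"{base} <|-- {name}" for base in info["bases"])
    lines.append("@enduml")
    return "\n".join(lines)

# Hypothetical mini-example in the spirit of the paper's XAI configurations.
demo = textwrap.dedent("""
    class Explainer:
        def explain(self, instance): ...

    class ShapExplainer(Explainer):
        def explain(self, instance): ...
""")

print(to_plantuml(extract_classes(demo)))
```

Keeping the pass purely syntactic is what makes the extraction "conservative": the diagram can be regenerated on every commit and is guaranteed to be consistent with the code it was read from, which is the consistency property the documentation workflow relies on.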

Files

Documenting AI Systems under the EU AI Act - A UML Architectural Framework with Support for Post-Hoc XAI - v5.pdf

Additional details

Dates

Created
2025-08-13
First upload.
Updated
2025-09-19
Second upload. Improved the research.
Updated
2025-09-26
Third upload. Fixed some inconsistencies.
Updated
2025-10-25
Fourth upload. Improved the research.
Updated
2025-11-03
Fifth upload. Minor changes.
Updated
2026-01-28
Sixth upload. Improved the research.
Updated
2026-03-30
Seventh upload. Improved the research.

Software

Repository URL
https://github.com/miklotovx/UMLOOModeler
Programming language
Python
Development Status
Active

References

  • [1] EUROPEAN UNION. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, L 1689, 12 July 2024, p. 1–147. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
  • [2] MITCHELL, M.; WU, S.; ZALDIVAR, A.; BARNES, P.; VASSERMAN, L.; HUTCHINSON, B.; SPITZER, E.; RAGHU, M.; GEBRU, T. Model Cards for Model Reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). New York, NY, USA: ACM, 2019. p. 220–229. https://doi.org/10.1145/3287560.3287596
  • [3] BARREDO ARRIETA, A. et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82–115, 2020. https://doi.org/10.1016/j.inffus.2019.12.012
  • [4] SAARELA, M.; PODGORELEC, V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. Applied Sciences, 14(19):8884, 2024. https://doi.org/10.3390/app14198884
  • [5] SCANTAMBURLO, T.; FALCARIN, P.; VENERI, A.; FABRIS, A.; GALLESE, C.; BILLA, V.; ROTOLO, F.; MARCUZZI, F. Software Systems Compliance with the AI Act: Lessons Learned from an International Challenge. In: Proceedings of the 2nd International Workshop on Responsible AI Engineering (RAIE '24). Lisbon, Portugal: ACM, 2024. p. 1–8. https://doi.org/10.1145/3643691.3648589
  • [6] SELIC, B. A systematic approach to domain-specific language design using UML. In: 10th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), 2007. p. 2–9. https://doi.org/10.1109/ISORC.2007.10
  • [7] OBJECT MANAGEMENT GROUP (OMG). OMG Unified Modeling Language (UML), Version 2.5. 2015. Available at: https://www.omg.org/spec/UML/2.5
  • [8] BRAMBILLA, M.; CABOT, J.; WIMMER, M. Model-Driven Software Engineering in Practice. 2nd ed. Morgan & Claypool Publishers, 2017.
  • [9] MOLNAR, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2022. Available at: https://christophm.github.io/interpretable-ml-book/
  • [10] RIBEIRO, M. T.; SINGH, S.; GUESTRIN, C. "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, 2016. p. 1135–1144. https://doi.org/10.1145/2939672.2939778
  • [11] LUNDBERG, S. M.; LEE, S.-I. A Unified Approach to Interpreting Model Predictions. In: Advances in Neural Information Processing Systems (NeurIPS 2017), vol. 30. Available at: https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
  • [12] PEDREGOSA, F. et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011. Available at: https://jmlr.org/papers/v12/pedregosa11a.html
  • [13] SPANHOL, F. A.; OLIVEIRA, L. S.; PETITJEAN, C.; HEUTTE, L. Breast cancer histopathological image classification using convolutional neural networks. In: 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016, p. 2560–2567. https://doi.org/10.1109/IJCNN.2016.7727519
  • [14] WEINSTEIN, J. N. et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nature Genetics, v. 45, n. 10, p. 1113–1120, 2013. https://doi.org/10.1038/ng.2764
  • [15] KLUYVER, T. et al. Jupyter Notebooks – a publishing format for reproducible computational workflows. In: Positioning and Power in Academic Publishing: Players, Agents and Agendas. IOS Press, 2016. https://doi.org/10.3233/978-1-61499-649-1-87
  • [16] HUMMEL, A. et al. The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System. arXiv preprint arXiv:2505.20311v3, 2026. Available at: https://arxiv.org/abs/2505.20311v3
  • [17] NAVARRO, A.; LAVALLE, A.; MATÉ, A.; TRUJILLO, J. A modeling approach for designing explainable Artificial Intelligence. In: ER2023: Companion Proceedings of the 42nd International Conference on Conceptual Modeling: ER Forum, 7th SCME, CEUR Workshop Proceedings. Available at: https://ceur-ws.org/Vol-3618/forum_paper_24.pdf
  • [18] GONÇALVES, D.; CORREIA, A. XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems. Journal of Cybersecurity and Privacy, v. 6, 2026, Art. 43.
  • [19] LUCAJ, L.; LOOSLEY, A.; JONSSON, H.; GASSER, U.; VAN DER SMAGT, P. TechOps: Technical documentation Templates for the AI Act. arXiv preprint arXiv:2508.08804, 2025. Available at: https://arxiv.org/abs/2508.08804
  • [20] CEDERBLADH, J.; CICCHETTI, A.; SURYADEVARA, J. Early Validation and Verification of System Behaviour in Model-based Systems Engineering: A Systematic Literature Review. ACM Transactions on Software Engineering and Methodology (TOSEM), v. 33, n. 3, 2024, p. 1–67. https://doi.org/10.1145/3631976