Documenting Post-Hoc XAI Systems: An Initial UML Approach for EU AI Act Compliance
Description
Artificial Intelligence (AI) has gained prominence in recent years and is now widely applied in both academic and industrial contexts. Its popularization has raised several challenges, particularly the need to make AI models auditable. Explainable Artificial Intelligence (XAI) seeks to address this issue through methods that interpret the decisions of black-box models. Despite this progress, few studies integrate XAI into the software engineering cycle. At the same time, the European Union's AI Act (Regulation 2024/1689) requires extensive documentation for high-risk systems, often resulting in hundreds of pages of reports. To bridge this gap, this work proposes an initial UML approach for EU AI Act compliance that unifies UML, XAI, and regulatory documentation practices. The approach introduces stereotypes, tagged values, and relationships for LIME, SHAP, ICE, and Ceteris Paribus-based explanations. By graphically representing critical XAI elements, it enhances traceability and auditability while providing partial coverage of the compliance requirements, serving as a structured complement to the mandatory textual documentation. The proposal is illustrated through a case study involving a breast cancer diagnosis system.
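For readers unfamiliar with the post-hoc artifacts being modeled, the sketch below shows what a SHAP explanation for a breast cancer classifier might look like in code. It is a minimal illustration assuming scikit-learn and the `shap` package, not the authors' implementation (see the repository URL below for that), and the tagged-value names in the final dictionary are hypothetical rather than taken from the paper's UML profile.

```python
# Minimal sketch of a post-hoc explanation artifact of the kind the
# approach documents. Assumes scikit-learn and the `shap` package;
# this is NOT the paper's implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap

# Train a black-box classifier on the breast cancer dataset from the case study.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Generate a post-hoc SHAP explanation for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:1])

# Metadata of this kind is what a «SHAPExplanation»-style stereotype with
# tagged values could record in a class diagram (names are illustrative).
explanation_record = {
    "xai_method": "SHAP (TreeExplainer)",
    "model": type(model).__name__,
    "explained_instance": X_test.index[0],
    "n_features": X.shape[1],
}
print(explanation_record)
```

An analogous record could be produced for LIME, ICE, or Ceteris Paribus explanations, each mapping to its own stereotype in the proposed profile.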
Files
- An Initial UML Approach for EU AI Act Compliance - v3.1.pdf (481.6 kB, md5:ea0e1d86ec9266dea793313efd15ab24)
Additional details
Dates
- Created: 2025-08-13. First upload.
- Updated: 2025-09-19. Second upload; improved the research.
- Updated: 2025-09-26. Third upload; fixed some inconsistencies.
- Updated: 2025-10-25. Fourth upload; improved the research.
- Updated: 2025-11-03. Fifth upload; minor changes.
Software
- Repository URL: https://github.com/miklotovx/diag_system_en
- Programming language: Python
- Development status: Active
References
- [1] EUROPEAN UNION. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, L 1689, 12 July 2024, p. 1–147. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- [2] RIBEIRO, M. T.; SINGH, S.; GUESTRIN, C. "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, 2016. p. 1135–1144. https://doi.org/10.1145/2939672.2939778
- [3] BARREDO ARRIETA, A. et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82–115, 2020. https://doi.org/10.1016/j.inffus.2019.12.012
- [4] HOLZINGER, A. et al. Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4):e1312, 2019. https://doi.org/10.1002/widm.1312
- [5] GUNNING, D.; AHA, D. W. DARPA's Explainable Artificial Intelligence Program. AI Magazine, v. 40, n. 2, p. 44–58, 2019. https://doi.org/10.1609/aimag.v40i2.2850
- [6] SAARELA, M.; PODGORELEC, V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. Applied Sciences, 14(19):8884, 2024. https://doi.org/10.3390/app14198884
- [7] GUIDOTTI, R. et al. A survey of methods for explaining black box models. ACM Computing Surveys, v. 51, n. 5, p. 1–42, 2018. https://doi.org/10.1145/3236009
- [8] BHATI, D. et al. A Survey on Post-Hoc Explanation Methods for XAI Visualization. arXiv preprint arXiv:2501.17189, 2025. Available at: https://www.techrxiv.org/users/866757/articles/1262953-a-survey-on-post-hoc-explanation-methods-for-xai-visualization
- [9] BILAL, A.; EBERT, D.; LIN, B. LLMs for Explainable AI: A Comprehensive Survey. arXiv preprint arXiv:2504.00125, 2025. Available at: https://arxiv.org/abs/2504.00125
- [10] MOLNAR, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2022. Available at: https://christophm.github.io/interpretable-ml-book/
- [11] GILPIN, L. H.; BAU, D.; YUAN, B. Z.; BAJWA, A.; SPECTER, M.; KAGAL, L. Explaining explanations: An overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 2018, pp. 80–89. https://doi.org/10.1109/DSAA.2018.00018
- [12] MUSTAC, T.; HENSE, P. SoS AI: An AI System of Systems Approach for EU AI Act Conformity. SSRN Electronic Journal, 2025. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5280073
- [13] HUMMEL, A. et al. The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System. arXiv preprint arXiv:2505.20311, 2025. Available at: https://arxiv.org/abs/2505.20311
- [14] SEIFI, S. et al. Complying with the EU AI Act: Innovations in Explainable and User-Centric Hand Gesture Recognition. arXiv preprint arXiv:2503.15528, 2025. Available at: https://arxiv.org/abs/2503.15528
- [15] OBJECT MANAGEMENT GROUP (OMG). OMG Unified Modeling Language (UML), Version 2.5. 2015. Available at: https://www.omg.org/spec/UML/2.5
- [16] SELIC, B. A systematic approach to domain-specific language design using UML. In: 10th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), 2007. p. 2–9. https://doi.org/10.1109/ISORC.2007.10
- [17] NAVARRO, A.; LAVALLE, A.; MATÉ, A.; TRUJILLO, J. A modeling approach for designing explainable Artificial Intelligence. In: Companion Proceedings of the 42nd International Conference on Conceptual Modeling (ER 2023): ER Forum and 7th SCME. CEUR Workshop Proceedings, v. 3618, 2023. Available at: https://ceur-ws.org/Vol-3618/forum_paper_24.pdf
- [18] LUNDBERG, S. M.; LEE, S.-I. A Unified Approach to Interpreting Model Predictions. In: Advances in Neural Information Processing Systems (NeurIPS 2017), v. 30, 2017. Available at: https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
- [19] GOLDSTEIN, A.; KAPELNER, A.; BLEICH, J.; PITKIN, E. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation. Journal of Computational and Graphical Statistics, v. 24, 2015. https://doi.org/10.1080/10618600.2014.907095
- [20] KUŹBA, M.; BARANOWSKA, E.; BIECEK, P. pyCeterisParibus: explaining Machine Learning models with Ceteris Paribus Profiles in Python. Journal of Open Source Software, v. 4, n. 37, p. 1389, 2019. https://doi.org/10.21105/joss.01389
- [21] SHARMA, A.; KUMAR, D. Classification with 2-D Convolutional Neural Networks for Breast Cancer Diagnosis. Scientific Reports, 12:21857, 2022. https://doi.org/10.1038/s41598-022-26378-6
- [22] PEDREGOSA, F. et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, v. 12, p. 2825–2830, 2011. Available at: https://jmlr.org/papers/v12/pedregosa11a.html
- [23] KLUYVER, T. et al. Jupyter Notebooks – a publishing format for reproducible computational workflows. In: Positioning and Power in Academic Publishing: Players, Agents and Agendas. IOS Press, 2016. https://doi.org/10.3233/978-1-61499-649-1-87
- [24] LOGILAB. Pyreverse – UML Diagrams for Python. Available at: https://pypi.org/project/pylint/
- [25] CEDERBLADH, J.; CICCHETTI, A.; SURYADEVARA, J. Early Validation and Verification of System Behaviour in Model-based Systems Engineering: A Systematic Literature Review. ACM Transactions on Software Engineering and Methodology (TOSEM), v. 33, n. 3, 2024, p. 1–67. https://doi.org/10.1145/3631976
- [26] GOOGLE CLOUD ARCHITECTURE CENTER. MLOps: Continuous delivery and automation pipelines in machine learning. 2020. Available at: https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning
- [27] EUROPEAN COMMISSION. Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Policy Makers. High-Level Expert Group on Artificial Intelligence, 2020/2025 update. Available at: https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
- [28] STAMPERNAS, S.; LAMBRINOUDAKIS, C. A Framework for Compliance with Regulation (EU) 2024/1689 for Small and Medium-Sized Enterprises. Journal of Cybersecurity and Privacy, 2025. https://doi.org/10.3390/jcp5030040
- [29] AHMAD, N.; ISMAIL, O. Ethical and Governance Frameworks for Artificial Intelligence: A Systematic Literature Review. International Journal of Interactive Mobile Technologies (iJIM), 2025. https://doi.org/10.3991/ijim.v19i14.56981
- [30] ZAROUR, M.; ALZABUT, H.; AL-SARAYREH, K. MLOps best practices, challenges and maturity models: A systematic literature review. Information and Software Technology, 2025. https://doi.org/10.1016/j.infsof.2025.107733
- [31] HASSAN, S. U. et al. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review. Computers in Biology and Medicine, 2025. https://doi.org/10.1016/j.compbiomed.2024.109569
- [32] MATHEW, D. E.; EBEM, D. U.; IKEGWU, A. C. et al. Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretable and Understanding of AI Models for Human. Neural Processing Letters, v. 57, 16, 2025. https://doi.org/10.1007/s11063-025-11732-2