The "Black Box" Problem in Artificial Intelligence and the Right to Explanation of Decisions
Description
This article examines one of the most pressing issues at the intersection of modern law and information technology: the "black box" phenomenon in artificial intelligence systems and the associated right of individuals to obtain explanations of automated decisions.
The methodological basis of the study is a comparative legal analysis of international approaches to regulating the transparency of algorithmic systems, including the General Data Protection Regulation (GDPR) of the European Union, the EU Artificial Intelligence Act of 2024, as well as the national legislation of the Republic of Uzbekistan, in particular Presidential Decree No. UP-189 dated October 22, 2025.
The results of the study indicate the need for a comprehensive approach that combines legal, technical, and ethical tools to ensure algorithmic transparency.
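Among the technical tools the study points to are explainable-AI methods such as counterfactual explanations (see reference 10 below), which answer the question "what would have had to differ for the decision to change?" without opening the model itself. The following is a purely illustrative sketch, not taken from the paper: the linear scoring model, feature names, weights, and threshold are all hypothetical.

```python
# Hypothetical linear credit-scoring model used only to illustrate
# a counterfactual explanation of an automated decision.
FEATURES = ["income_k", "debt_k", "years_employed"]
WEIGHTS = [0.05, -0.10, 0.30]   # toy weights, one per feature
BIAS = -2.0
THRESHOLD = 0.0                 # score >= THRESHOLD -> application approved

def score(x):
    """Linear score for a feature vector x (same order as FEATURES)."""
    return sum(w * v for w, v in zip(WEIGHTS, x)) + BIAS

def counterfactual(x, feature):
    """Value of one feature that would exactly flip a rejection to approval.

    Returns (feature_name, required_value), or None if the feature has
    zero weight and therefore cannot change the decision.
    """
    i = FEATURES.index(feature)
    w = WEIGHTS[i]
    if w == 0:
        return None
    gap = THRESHOLD - score(x)      # how far the score falls below the threshold
    return (feature, x[i] + gap / w)

applicant = [30.0, 10.0, 2.0]       # income 30k, debt 10k, 2 years employed
assert score(applicant) < THRESHOLD  # rejected under the toy model

name, value = counterfactual(applicant, "income_k")
print(f"Decision would flip if {name} rose from {applicant[0]} to {value:.1f}")
# -> Decision would flip if income_k rose from 30.0 to 48.0
```

A counterfactual of this kind is one candidate form for the "explanation" the legal literature debates: it is meaningful to the affected individual while disclosing nothing about the model's internal structure.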
Files
53-60 Erkinbaeva JLE.pdf (939.7 kB), md5:8f73ce640be23f71d1011b7c247ecc4b
Additional details
References
- 1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) // Official Journal of the European Union. — L series. — 2024.
- 2. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data (GDPR) // Official Journal of the European Union. — L 119. — 2016.
- 3. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#
- 4. Decree of the President of the Republic of Uzbekistan No. UP-189 of 22.10.2025 «On Additional Measures for the Further Development of Artificial Intelligence Technologies» // National Database of Legislation. — 27.10.2025.
- 5. Resolution of the President of the Republic of Uzbekistan No. PP-358 of 14.10.2024 «On Approval of the Strategy for the Development of Artificial Intelligence Technologies until 2030».
- 6. Ethical Rules for the Creation, Implementation and Use of Artificial-Intelligence-Based Solutions (approved by order of the Minister of Digital Technologies, reg. No. 3787 of 14.03.2026).
- 7. Decree of the President of the Republic of Uzbekistan No. UP-140 of 21.08.2025 «On Additional Measures to Improve Access to Justice through the Introduction of AI Technologies in the Activities of Courts».
- 8. Goodman B., Flaxman S. European Union Regulations on Algorithmic Decision-Making and a «Right to Explanation» // AI Magazine. — 2017. — Vol. 38. — No. 3.
- 9. Wachter S., Mittelstadt B., Floridi L. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation // International Data Privacy Law. — 2017. — Vol. 7. — No. 2.
- 10. Wachter S., Mittelstadt B., Russell C. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR // Harvard Journal of Law & Technology. — 2018. — Vol. 31. — No. 2.
- 11. Edwards L., Veale M. Slave to the Algorithm? Why a «Right to an Explanation» Is Probably Not the Remedy You Are Looking For // Duke Law & Technology Review. — 2017. — Vol. 16.
- 12. Kaminski M.E. The Right to Explanation, Explained // Berkeley Technology Law Journal. — 2019. — Vol. 34. — No. 1.
- 13. Brkan M. Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond. — 2017.
- 14. Ribeiro M.T., Singh S., Guestrin C. «Why Should I Trust You?»: Explaining the Predictions of Any Classifier // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. — 2016.
- 15. Lundberg S., Lee S.-I. A Unified Approach to Interpreting Model Predictions // Advances in Neural Information Processing Systems. — 2017.
- 16. Salih A. et al. A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME // Advanced Intelligent Systems. — 2025. — Vol. 7. — 2400304.
- 17. Panigutti C. et al. The Role of Explainable AI in the Context of the AI Act // Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. — 2023.
- 18. Papadimitriou E. The Right to Explanation in the Processing of Personal Data with the Use of AI Systems // International Journal of Law in Changing World. — 2023.
- 19. Buruiană (Rusu) A. Black Box AI and the Sovereignty of Personal Data: Between GDPR and Digital Ethics // European Journal of Law and Public Administration. — 2025.
- 20. Vorras A., Mitrou L. Unboxing the Black Box of Artificial Intelligence: Algorithmic Transparency and/or a Right to Functional Explainability // EU Internet Law in the Digital Single Market. — Cham: Springer, 2021.
- 21. Angwin J. et al. Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks // ProPublica. — 2016. — 23 May.
- 22. Dastin J. Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women // Reuters. — 2018. — 10 October.
- 23. Obermeyer Z. et al. Dissecting racial bias in an algorithm used to manage the health of populations // Science. — 2019. — Vol. 366. — No. 6464.
- 24. OECD Recommendation on Artificial Intelligence. OECD/LEGAL/0449. — Paris: OECD Publishing, 2019.
- 25. High-Level Expert Group on AI. Ethics Guidelines for Trustworthy AI. — Brussels: European Commission, 2019.
- 26. Molnar C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. — 2nd ed. — Lulu.com, 2022.
- 27. Gunning D. et al. XAI — Explainable Artificial Intelligence // Science Robotics. — 2019. — Vol. 4. — eaay7120.
- 28. Dwivedi R. et al. Explainable AI (XAI): Core Ideas, Techniques, and Solutions // ACM Computing Surveys. — 2023.
- 29. Linardatos P., Papastefanopoulos V., Kotsiantis S. Explainable AI: A Review of Machine Learning Interpretability Methods // Entropy. — 2020.
- 30. Chaudhary G. Unveiling the Black Box: Bringing Algorithmic Transparency to AI // Masaryk University Journal of Law and Technology. — 2024.