Deliverable 2.3 - Final Report and Guidelines of IDEA project
Description
The report follows a logical structure that moves from empirical research to theoretical and regulatory analysis, and finally to practical recommendations. It first discusses the opportunities and challenges that arise from the introduction of artificial intelligence in judicial systems and defines the main research questions related to the technological readiness of courts and to the acceptance of AI among legal professionals.
Then, it examines the main AI initiatives currently developed in Europe. Using data from the Council of Europe’s CEPEJ Resource Centre on Cyberjustice and AI, it maps the level of adoption of AI technologies and describes how they are applied in judicial environments.
Next, it reviews applications in case management and legal research, as well as more advanced systems for decision support and predictive analytics. Examples such as the JuLIA and FACILEX projects, together with several Online Dispute Resolution initiatives, illustrate concrete experiences and good practices that can guide future developments.
Ethical and regulatory aspects are the focus of Section 6. Starting from the CEPEJ European Ethical Charter on AI in Judicial Systems, the section examines how these principles have influenced subsequent regulatory frameworks, in particular the EU AI Act, which classifies judicial AI systems as “high-risk.” Also taking into account the results of the empirical analysis described in Annexes A and B, four main categories of risk are discussed: algorithmic bias and discrimination; opacity and lack of accountability (the so-called “black box” issue); risks to human oversight and judicial independence, including automation bias and AI hallucinations; and challenges linked to governance fragmentation and the absence of standardization. Each risk is analysed through the available empirical evidence, and possible mitigation strategies are proposed, combining technical instruments with organizational and regulatory measures.
The report then translates the previous analyses (including the empirical analysis of stakeholders’ engagement) into a set of practical guidelines for the responsible use of AI in the justice sector. The guidelines are grouped into six thematic clusters: legal and ethical foundations, governance and accountability mechanisms, operational safeguards, technical and security requirements, development standards and capacity building, and institutional learning. For each cluster, the report offers implementation suggestions, relevant examples, and direct connections to the risks identified in earlier sections, ensuring a consistent link between analysis and practice.
Finally, the report summarizes the main conclusions and reflects on the future perspective of AI in European judicial systems. It stresses the importance of keeping a human-centred approach, where AI supports but does not replace professional judgment. By combining empirical evidence, regulatory discussion, and practical guidance, the report provides a comprehensive and coherent reference framework for the responsible integration of AI into European justice institutions.
These concluding considerations rest on a robust empirical basis. The core empirical foundation of this report derives from extensive stakeholder engagement activities conducted across Belgium, Croatia, the Czech Republic, Estonia, Italy, and Lithuania. These activities comprised focus groups and in-depth interviews with judges, lawyers, court staff, ICT specialists, and policymakers, exploring their experiences with digital justice tools, the adequacy of their training, their attitudes toward AI and predictive justice, and their perceptions of the associated risks. The detailed findings from these empirical activities are presented in two annexes:
• Annex A presents comprehensive results from focus groups and interviews with legal professionals (judges, lawyers, court staff, and ICT experts), organized thematically around: perception of technological development in the judiciary; level of training provided; attitude toward AI and predictive justice; and perception of AI-related risks and mitigation measures. The presentation of these results is preceded by a discussion of the methodology employed.
• Annex B complements the analysis with perspectives from policymakers, including senior judges, ministry officials, and judicial administrators, whose institutional roles provide broader insights into governance issues, policy priorities, and resource constraints.
These empirical findings inform and validate the analysis presented throughout the main body of the report, providing concrete evidence of both opportunities and challenges in AI deployment.
Technical info (English)
The Report is the result of the joint and indivisible work of the authors.
If, however, individual responsibility must be assigned for academic purposes: Giampiero Lupo authored Sections 1 and 2 and Annexes A and B, as well as the revision of the entire document; Alessandro Sbarro authored Sections 3 through 7 and Annex C; Marco Giacalone and Marianna Molinari revised the entire work, ensuring its overall coherence, also in light of the data on the use of technologies within the justice systems of the countries studied, provided by the following contributing researchers: Marko Stilinović, Tatjana Josipović, Esra Palit, Thomas Hoffmann, Milda Markevičiūtė, Rimantas Simaitis, Andrej Krištofík, Pavel Loutocký, Alessio Bigi, Gina Gioia.
Files
IDEA Project Deliverable D2.3.pdf (2.1 MB)
md5:71eca29607553326f6dfc2fe8d4da97a