Published November 23, 2025 | Version v6
Report | Open Access

White Paper on Advancing Trusted Research Environments for Healthcare AI

Description

This white paper explores the evolution of Trusted Research Environments (TREs) as Europe implements the European Health Data Space (EHDS) Regulation, which mandates that all secondary use of health data must occur within secure processing environments. TREs are now central to health research, ensuring data confidentiality and legal compliance by offering controlled platforms for sensitive data analysis. However, existing TREs fall short when supporting advanced analytics, particularly for AI development, due to limited tools for handling big data, integrating complex workflows, and managing the secure release of AI models. These priorities are situated within a broader expectation that AI in healthcare must be grounded in transparent, accountable and human-centred practices. TREs must operate within a quadruple-helix constellation involving public authorities, industry partners, researchers, clinicians and civil society, including patients whose experiences inform the direction of healthcare AI. To support a level playing field, TREs must give these groups access to shared evidence and consistent insight into how models are developed, validated and monitored.

The paper identifies three critical areas for advancing TRE capabilities: (1) real-time AI oversight and explainability, (2) shared validation toolkits (“Innovation Commons”) that standardise legal, ethical, and technical checks, and (3) secure model transfer and ongoing post-deployment monitoring. Drawing on the Swedish TRE4HealthAI project, it highlights the need for enhanced automation, robust risk assessment during model training, and clear frameworks for validation and deployment. These areas must be addressed within institutional processes that recognise the importance of human participation, shared responsibility and documented decision pathways. The emergence of practices such as JUST Data (judicious, unbiased, safe and transparent) offers the procedural discipline needed for high-quality data documentation and annotation, giving researchers, clinicians, regulators and developers a shared foundation for understanding how interpretative choices shape model behaviour. Rather than a ‘human-in-the-loop’ approach, in which humans serve primarily as adjuncts to automated systems, the paper affirms an ‘AI-in-the-loop’ model in which human judgement remains central and AI plays a supporting role. This supports clearer reasoning, consistent communication and more reproducible scientific practice.

Addressing these gaps is essential to meet the EHDS’s regulatory requirements and enable trustworthy, data-driven innovation in healthcare. To support these expectations, the paper outlines a federated architecture for next-generation TREs that remains interoperable across HealthData@EU while providing the procedural clarity required for responsible collaboration. This includes Transfer IP frameworks that define the rights and obligations of partners when models, code or datasets move between contributors. It also includes Pre-Flight processes, which establish structured conditions in which partners can test assumptions, examine edge cases and confirm workflow readiness before operational deployment. The paper contrasts Sweden’s federated, decentralised approach with Finland’s centralised model for secure processing, using TRE4HealthAI and Swedish initiatives such as VAI-B as EHDS-aligned demonstrators of how next-generation TRE capabilities can operate without new state-owned repositories. This comparative lens clarifies the boundary conditions for SPEs/TREs in Sweden and highlights how shared validation, export ‘airlocks’ and real-time oversight can be implemented in such an architecture.

The paper concludes with recommendations to modernise TREs with next-generation features, and to build infrastructures that are technically robust, ethically sound, and fully compliant with European data protection standards. This includes the integration of socio-technical practices that support explainability, reproducibility and informed human oversight. By combining regulatory alignment, technical development and structured collaboration, TREs can provide the basis for a trustworthy environment in which healthcare AI is safe, effective and capable of supporting high-impact clinical and research outcomes.

Files

2025-11-23 White Paper on Advancing TREs for Health AI - FINAL v6.pdf

Files (1.3 MB)