Published April 28, 2026 | Version v1

Fast, Low-resource, Accurate, Robust, and Effectual Medical Image Segmentation and Classification in 3D CT and MRI Scans

  • 1. University of Toronto

Description

Medical imaging plays a central role in disease diagnosis, prognosis, and treatment planning. Although deep learning has substantially advanced medical image analysis, developing general-purpose models that are fast, resource-efficient, accurate, and robust across diverse clinical scenarios remains a major challenge. Addressing these limitations is crucial for improving global healthcare delivery, particularly in resource-constrained environments. To this end, we have organized a series of community-driven FLARE challenges aimed at accelerating progress in generalizable medical image segmentation.

Since 2021, the FLARE challenges have progressively expanded in clinical scope, data scale, and technical complexity:

  • FLARE 2021: segmentation of 4 abdominal organs in 511 CT scans
  • FLARE 2022: segmentation of 13 abdominal organs in 2,300 CT scans
  • FLARE 2023: segmentation of 13 abdominal organs and pan-cancer lesions in 4,500 CT scans
  • FLARE 2024: pan-cancer segmentation, laptop-based CT organ segmentation, and unsupervised domain adaptation for MRI, using 10,000+ CT and 4,000+ MRI scans
  • FLARE 2025: pan-cancer segmentation, laptop-based CT organ segmentation, unsupervised domain adaptation for MRI and PET, and development of foundation and multimodal models using 10,000+ CT and 10,000+ MRI scans

Leading algorithms from these challenges can now segment 13 organs and multiple abdominal lesion types within 10 seconds on full-resolution 3D CT volumes containing over one million voxels. These advances significantly improve segmentation accuracy, computational efficiency, and cross-domain generalization. Building on this trajectory, FLARE 2026 aims to address critical clinical needs and emerging challenges in developing foundation-level and multimodal AI systems for medical imaging. The challenge focuses on three subtasks:

  • Pan-cancer segmentation: develop robust lesion segmentation across multiple disease types in CT scans.
  • Self-configured multi-task learning: develop self-configuring models capable of joint 3D medical image segmentation and classification that adapt automatically to diverse datasets.
  • Multimodal medical image parsing: develop generalist vision–language models that support classification, detection, counting, measurement, and regression across multiple imaging modalities.

The anticipated impact of FLARE 2026 is to catalyze innovations in resource-efficient, generalizable, and multimodal AI systems, establishing new performance benchmarks in medical image analysis. Clinically, advancements from this challenge will directly enhance patient care by enabling accurate, reliable, and widely accessible imaging solutions, helping bridge diagnostic disparities between high-resource and low-resource healthcare environments.

Files

327-Fast_Low-resource_Accurate_Robust_and_Effectual_Medical_Image_2026-04-22T16-37-14.pdf