Self-supervised learning for 3D light-sheet microscopy image segmentation
Creators
- 1. Helmholtz Munich, Germany
- 2. Imperial College London, UK
- 3. Ludwig Maximilian University of Munich, Germany
- 4. University of Zurich, Switzerland
- 5. Technical University of Munich, Germany
Description
In the realm of modern biological research, the ability to visualize and understand complex structures within tissues and organisms is crucial. Traditional imaging methods often face challenges in providing detailed 3D views without compromising sample integrity. Light-sheet microscopy (LSM), applied after tissue clearing and specific structure staining, overcomes these limitations, making it an efficient, high-contrast, ultra-high-resolution method for visualizing a wide array of biological structures in diverse samples, such as cellular and subcellular structures, organelles, and processes.
In the structure staining step, various dyes, fluorophores, or antibodies can be employed to selectively label specific biological structures within samples and enhance their contrast under the microscope. In the tissue clearing step, inherently opaque biological samples are rendered transparent while preserving sample integrity and the fluorescence of labeled structures, allowing light to penetrate deeper into the tissue. Combined with structure staining and tissue clearing, LSM provides researchers with unprecedented capabilities to visualize intricate biological structures at high spatial resolution, offering new insights into biomedical research fields such as neuroscience, immunology, oncology, and cardiology.
To analyze LSM images across these fields, segmentation plays a pivotal role in identifying and distinguishing different biological structures. For very small-scale LSM images, segmentation can be done manually. In whole-organ or whole-body LSM, however, manual segmentation is prohibitively time-intensive: a single image can contain on the order of 10,000^3 voxels, so automatic segmentation methods are in high demand. Recent strides in deep learning offer promising solutions for automated segmentation of LSM images. Although these methods have reached segmentation performance comparable to expert human annotators, their success largely relies on supervised learning from extensive training sets of manually annotated images specific to one kind of structure staining. Large-scale annotation for diverse LSM image segmentation tasks therefore poses a great challenge.
Self-supervised learning proves advantageous in this context, as it allows deep learning models to pretrain on large-scale, unannotated datasets, learning useful and general representations of LSM image data. Subsequently, the model can be fine-tuned on a smaller labeled dataset for a specific segmentation task. Notably, self-supervised learning has not been extensively explored within the LSM field, despite the availability of vast sets of LSM data covering different biological structures. Some properties of LSM images, e.g., the high signal-to-noise ratio, make the data particularly well suited for self-supervised learning.
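To make the pretrain-then-fine-tune idea concrete, below is a minimal PyTorch sketch of one possible pretext task, masked 3D reconstruction on unlabeled volumes. The tiny `Encoder3D` backbone, the masking scheme, and all names are illustrative assumptions, not a method prescribed by the challenge.

```python
# Minimal sketch of a 3D masked-reconstruction pretext task (assumption:
# one plausible self-supervised objective, not the challenge's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Tiny 3D conv encoder-decoder used as a stand-in backbone."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def mask_volume(x, patch=32, mask_ratio=0.6):
    """Zero out a random subset of non-overlapping 3D patches."""
    b, c, d, h, w = x.shape
    mask = (torch.rand(b, 1, d // patch, h // patch, w // patch,
                       device=x.device) < mask_ratio).float()
    mask = F.interpolate(mask, scale_factor=patch, mode="nearest")
    return x * (1 - mask), mask

model = Encoder3D()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
volume = torch.rand(2, 1, 64, 64, 64)   # stand-in for unlabeled LSM crops
masked, mask = mask_volume(volume)
recon = model(masked)
# Reconstruction loss computed on the masked voxels only.
loss = ((recon - volume) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
loss.backward()
opt.step()
```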
In this challenge, we aim to host the inaugural MICCAI challenge on self-supervised learning for 3D LSM image segmentation, encouraging the community to develop self-supervised learning methods for general segmentation of various structures in 3D LSM images. With an effective self-supervised learning method, extensive unannotated 3D LSM images can be leveraged to pretrain segmentation models, encouraging them to capture high-level representations that generalize across different biological structures. The pretrained models can then be fine-tuned on substantially smaller annotated datasets, significantly reducing the annotation effort in 3D LSM segmentation applications; a sketch of this fine-tuning stage follows.
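Continuing the sketch above (and reusing its hypothetical `Encoder3D` backbone), the fine-tuning stage might attach a small segmentation head to the pretrained encoder and train on annotated patches. Again, every name and path here is a placeholder, not the challenge's prescribed pipeline.

```python
import torch
import torch.nn as nn

backbone = Encoder3D()  # stand-in backbone from the pretraining sketch above
# backbone.load_state_dict(torch.load("pretrained_lsm.pt"))  # hypothetical checkpoint

head = nn.Conv3d(1, 2, kernel_size=1)          # voxel-wise foreground/background logits
model = nn.Sequential(backbone, head)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

patch = torch.rand(1, 1, 64, 64, 64)           # stand-in for an annotated LSM crop
labels = torch.randint(0, 2, (1, 64, 64, 64))  # stand-in voxel-wise annotation
loss = criterion(model(patch), labels)
loss.backward()
opt.step()
```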
Each participant will receive a training dataset comprising two sets. The first set includes a large collection (>6x10^11 voxels, equivalent to >35,000 images of 256x256x256 voxels) of whole-brain 3D LSM images of both mouse and human samples without annotations, facilitating model pretraining through self-supervised learning. This will be one of the largest datasets ever provided for a MICCAI challenge. The second set consists of cropped patches from whole-brain (human and mouse) 3D LSM images with precise annotations, enabling fine-tuning of the model for segmentation tasks.
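At this scale, whole-brain volumes cannot be loaded into memory, so pretraining pipelines typically stream random crops from disk. Below is a hedged NumPy sketch of such sampling via a memory map; the file name, shape, and dtype are assumptions, and the actual challenge data format may differ.

```python
# Sketch of sampling random 256^3 training crops from a whole-brain volume
# too large to fit in memory (file name, shape, and dtype are assumptions).
import numpy as np

def random_crop(path, shape, crop=256, dtype=np.uint16):
    vol = np.memmap(path, mode="r", dtype=dtype, shape=shape)
    z, y, x = (np.random.randint(0, s - crop + 1) for s in shape)
    # Only the requested crop is read from disk, not the whole volume.
    return np.asarray(vol[z:z+crop, y:y+crop, x:x+crop], dtype=np.float32)

# Example with a hypothetical 2000^3 whole-brain scan stored as raw uint16:
# crop = random_crop("brain.raw", shape=(2000, 2000, 2000))
```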
Files
Self-supervised learning for 3D light-sheet micros.pdf (110.2 kB)
md5:24db9c3c4f21816cbd916824c8a288a0