Published March 12, 2026 | Version v2
Dataset | Open Access

Segmentation and deconvolution prediction models for the MEDUSSA pipeline. From the article "Deep-learning-based deconvolution and segmentation of fluorescent membranes for high precision bacterial cell size profiling".

  • 1. Max Planck Institute for Evolutionary Biology
  • 2. Queen Mary, University of London

Description

Models for segmentation of membranes in fluorescent images and prediction of deconvolved membranes from either membranes or cytoplasmic fluorescence. 

Deconvolution prediction models

For the FM2FM, FM2FM-HiSNR, and FP2FM models, you will need the csbdeep and TensorFlow Python libraries installed in a conda environment. Download and unzip the models first; they can then be loaded, for example in a Jupyter notebook, as follows:

from csbdeep.models import CARE
from csbdeep.utils import normalize

# Point name and basedir at the downloaded, unzipped model folder
model = CARE(config=None, name=MODEL_NAME, basedir=MODEL_DIRECTORY)
prediction = model.predict(normalize(image), axes='YX', n_tiles=(4, 4))
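As a side note, csbdeep's normalize performs percentile-based intensity normalization. A minimal NumPy sketch of that behaviour (assuming the usual 3rd/99.8th percentile defaults; this is an illustration, not the csbdeep implementation) looks like:

```python
import numpy as np

def percentile_normalize(x, pmin=3, pmax=99.8, eps=1e-20):
    """Rescale an image so the pmin/pmax percentiles map to roughly 0 and 1."""
    lo = np.percentile(x, pmin)
    hi = np.percentile(x, pmax)
    return (x - lo) / (hi - lo + eps)

# Deterministic toy image: a simple intensity ramp
img = np.arange(4096, dtype=np.float32).reshape(64, 64)
norm = percentile_normalize(img)
```

Normalizing before prediction matters because the CARE models were trained on percentile-normalized inputs; raw camera counts fed directly to the network will give poor results.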

FM2FM is a CARE-trained TensorFlow model (doi:10.1038/s41592-018-0216-7) for the prediction of deconvolved membranes from non-deconvolved membrane images.

FM2FM-HiSNR is a CARE-trained TensorFlow model (doi:10.1038/s41592-018-0216-7) for the prediction of high signal-to-noise-ratio (SNR) deconvolved membranes from non-deconvolved membrane images of either low or high SNR.

FP2FM is a CARE-trained TensorFlow model for the prediction of deconvolved membranes from non-deconvolved cytoplasmic fluorescence images.

Instance segmentation models

We provide fine-tuned segmentation models for four libraries: Omnipose (doi:10.1038/s41592-022-01639-4), Cellpose3 (doi:10.1038/s41592-025-02595-5), microSAM (doi:10.1038/s41592-024-02580-4), and Cellpose-SAM (doi:10.1101/2025.04.28.651001), each for both deconvolved and raw images. We recommend installing each library in a separate environment using a tool such as conda or pixi. For Omnipose, Cellpose3, and Cellpose-SAM, models can be loaded through their GUIs: in the "Models" tab, click "Add custom torch model to GUI" and navigate to the downloaded and unzipped model. For microSAM, the documentation provides instructions on loading custom models in its napari plugin or in Jupyter notebooks.
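Downstream of segmentation, the pipeline's goal is cell size profiling. All four libraries output an integer-labeled mask (background 0, each cell a distinct positive label), from which per-cell pixel areas can be extracted with plain NumPy. This is an illustrative sketch, not part of the MEDUSSA code:

```python
import numpy as np

def cell_areas(labels):
    """Return {cell_id: pixel_count} from an integer-labeled mask (0 = background)."""
    counts = np.bincount(labels.ravel())
    return {cell_id: int(n) for cell_id, n in enumerate(counts) if cell_id > 0 and n > 0}

# Toy labeled mask with two "cells"
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:5] = 1   # cell 1: 2 x 4 = 8 pixels
mask[5:7, 2:8] = 2   # cell 2: 2 x 6 = 12 pixels
areas = cell_areas(mask)  # {1: 8, 2: 12}
```

Multiplying pixel counts by the squared pixel size of your imaging setup converts these to physical areas.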

Files (4.9 GB)

CARE_models.zip, 231.5 MB (md5:4fe8cfa9b45446cb69721986319bdafa)
2.3 GB (md5:60a84258719b4e0f9a43b5c0c91cb2be)
2.3 GB (md5:36f6147857ca8de5a9214b9fde88d6a5)

Additional details