Segmentation and deconvolution prediction models for the MEDUSSA pipeline. From the article "Deep-learning-based deconvolution and segmentation of fluorescent membranes for high precision bacterial cell size profiling".
Description
Models for segmentation of membranes in fluorescent images and prediction of deconvolved membranes from either membranes or cytoplasmic fluorescence.
Deconvolution prediction models
For the FM2FM, FM2FM-HiSNR, and FP2FM models, you'll need the csbdeep and TensorFlow Python libraries installed in a conda environment. The models must first be downloaded and unzipped; they can then be loaded, for example in a Jupyter notebook, as follows:
from csbdeep.models import CARE
from csbdeep.utils import normalize
model = CARE(config=None, name=$MODEL_NAME$, basedir=$MODEL_DIRECTORY$)
prediction = model.predict(normalize(image), axes='YX', n_tiles=(4,4))
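As background for the snippet above: csbdeep's normalize rescales intensities by image percentiles before prediction. A minimal NumPy sketch of that idea (function name and the default percentiles 2 / 99.8 are assumptions for illustration, not the pipeline's own code):

```python
import numpy as np

def percentile_normalize(x, pmin=2.0, pmax=99.8, eps=1e-20):
    # Rescale so the pmin percentile maps to ~0 and the pmax
    # percentile maps to ~1, similar to csbdeep.utils.normalize.
    lo = np.percentile(x, pmin)
    hi = np.percentile(x, pmax)
    return (x - lo) / (hi - lo + eps)

img = np.arange(100, dtype=np.float32).reshape(10, 10)
norm = percentile_normalize(img)
```

Percentile-based (rather than min/max) normalization makes the rescaling robust to hot pixels and other intensity outliers in fluorescence images.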
FM2FM is a CARE-trained TensorFlow model (doi:10.1038/s41592-018-0216-7) for the prediction of deconvolved membranes from non-deconvolved membrane images
FM2FM-HiSNR is a CARE-trained TensorFlow model (doi:10.1038/s41592-018-0216-7) for the prediction of high Signal-to-Noise Ratio (SNR) deconvolved membranes from non-deconvolved membrane images of either low or high SNR
FP2FM is a CARE-trained TensorFlow model (doi:10.1038/s41592-018-0216-7) for the prediction of deconvolved membranes from non-deconvolved cytoplasmic fluorescence images
Instance segmentation models
We provide fine-tuned segmentation models for four different libraries: Omnipose (doi:10.1038/s41592-022-01639-4), Cellpose3 (doi:10.1038/s41592-025-02595-5), microSAM (doi:10.1038/s41592-024-02580-4), and Cellpose-SAM (doi:10.1101/2025.04.28.651001), for both deconvolved and raw images. We recommend installing each library in a separate environment using a tool such as conda or pixi. For Omnipose, Cellpose3, and Cellpose-SAM, models can be loaded through their GUIs: in the "Models" tab, click the "Add custom torch model to GUI" option and navigate to the downloaded and unzipped model. For microSAM, the documentation provides instructions on loading custom models in its napari plugin or in Jupyter notebooks.
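All four libraries output an integer instance mask (0 = background, 1..N = cells), from which per-cell size measurements can be derived. A minimal NumPy sketch of extracting per-cell pixel areas from such a mask (function name hypothetical; the actual pipeline may compute more refined size metrics such as length and width):

```python
import numpy as np

def cell_areas(labels):
    # labels: integer instance mask as produced by Omnipose, Cellpose,
    # or microSAM (0 = background, 1..N = individual cells).
    # Returns a dict mapping each cell label to its area in pixels.
    counts = np.bincount(labels.ravel())
    return {lab: int(n) for lab, n in enumerate(counts) if lab > 0 and n > 0}

# Toy mask with two "cells":
mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:4] = 1   # cell 1: 2x3 = 6 px
mask[4:6, 4:6] = 2   # cell 2: 2x2 = 4 px
areas = cell_areas(mask)
```

To convert pixel areas to physical units, multiply by the squared pixel size of the acquisition.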