Published May 4, 2023 | Version v2
Dataset | Open Access

Echo from noise: synthetically generated cardiac ultrasound data using semantic diffusion models

  • 1. King's College London
  • 2. Ultromics, King's College London

Description

This is the data repository for the paper: "Echo from noise: synthetic ultrasound image generation using diffusion models for real image segmentation", available at: https://arxiv.org/abs/2305.05424. The corresponding code is available at: https://github.com/david-stojanovski/echo_from_noise


This is the first work to use Denoising Diffusion Probabilistic Models (DDPMs) to generate medical images conditioned on semantic label maps.

Each of the 400+50 CAMUS patients contributes 4 labelled frames (ED and ES frames for the 2-chamber and 4-chamber views), giving 450 × 4 = 1800 initial semantic maps, to which we added the sector label. Five random deformations (each a combination of a random affine and an elastic deformation) were then applied to each semantic map, producing 1800 × 5 = 9000 transformed semantic maps (8000 for training and 1000 for validation).

Affine transformation ranges for rotation (degrees), translation, scale and shear were (-5, 5), (0, 0.05), (0.8, 1.05) and 5 respectively; this was implemented with the torchvision Python package. Elastic deformation was implemented with the TorchIO package, with the number of control points set to (10, 10, 4) and the maximum displacement to (0, 30, 30).
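For concreteness, a minimal sketch of this augmentation is given below. The parameter values come from the paragraph above; the tensor layout, variable names and the LabelMap wrapping are illustrative assumptions, not the authors' exact pipeline.

import torch
import torchio as tio
from torchvision import transforms

# Random affine: rotation in (-5, 5) degrees, translation up to 5% of the
# image size, scale in (0.8, 1.05), shear up to 5 degrees. torchvision's
# default nearest-neighbour interpolation keeps label values integer.
random_affine = transforms.RandomAffine(
    degrees=(-5, 5),
    translate=(0, 0.05),
    scale=(0.8, 1.05),
    shear=5,
)

# Random elastic deformation with a (10, 10, 4) control-point grid and
# maximum displacements of (0, 30, 30), as stated above.
random_elastic = tio.RandomElasticDeformation(
    num_control_points=(10, 10, 4),
    max_displacement=(0, 30, 30),
)

# One semantic map as a 4D (C, 1, H, W) integer tensor (dummy data here);
# the flat first spatial axis lines up with the 0 in max_displacement.
label_map = torch.randint(0, 4, (1, 1, 256, 256), dtype=torch.uint8)

affine_map = random_affine(label_map)

# Wrapping the map as a LabelMap makes TorchIO resample it with
# nearest-neighbour interpolation, so no fractional label values appear.
subject = tio.Subject(seg=tio.LabelMap(tensor=affine_map))
deformed_map = random_elastic(subject).seg.data

Applying this pair of transforms five times per map, with fresh random draws each time, yields the five deformed variants per semantic map described above.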

Using these 9000 semantic maps as input to the generative models, we produced 9000 synthetic ultrasound images.

Each echo view folder contains 3 folders (a loading sketch follows this list):

1) annotations: augmented label maps, without the sector label and without clipping to the ultrasound sector

2) images: ultrasound images generated by semantic diffusion model inference

3) sector_annotations: label maps that include the ultrasound cone sector; these were used to generate the corresponding semantic diffusion model images
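As an illustration of how the folders pair up, here is a hypothetical walk over one view folder. Only the three subfolder names come from this record; the view folder name, the .png extension and the assumption that matching files share a name are guesses.

from pathlib import Path
from PIL import Image

view_dir = Path("4ch_ed")  # one echo view folder (name assumed)

for image_path in sorted((view_dir / "images").glob("*.png")):
    # Assumes each generated image shares its file name with its label maps.
    sector_label = Image.open(view_dir / "sector_annotations" / image_path.name)
    plain_label = Image.open(view_dir / "annotations" / image_path.name)
    image = Image.open(image_path)  # semantic-diffusion-model output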

ema_0.9999_050000_2ch_ed_256.pt and ema_0.9999_050000_4ch_ed_256.pt are the saved checkpoints for the 2-chamber and 4-chamber diffusion models, respectively.
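The checkpoints can be inspected with plain PyTorch. The sketch below assumes each .pt file is a standard saved state_dict, which is typical for EMA checkpoints but not verified here; for actual inference, use the scripts in the linked code repository.

import torch

state_dict = torch.load("ema_0.9999_050000_2ch_ed_256.pt", map_location="cpu")
print(len(state_dict), "tensors")
print(sum(v.numel() for v in state_dict.values()) / 1e6, "M parameters")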

The pretrained segmentation networks are provided in the final_models.zip file.

A diagram of the image counts is shown in Data diagram.png.

Notes

Funding sources:

1) Wellcome/EPSRC Centre for Medical Engineering [WT203148/Z/16/Z]

2) Wellcome Trust Senior Research Fellowship [209450/Z/17/Z]

This work was also supported by the Centre for Doctoral Studies in Surgical & Interventional Engineering at King's College London.

Files (6.5 GB)

Name               Size       MD5
Data diagram.png   185.1 kB   md5:a636d1c45f4b9245f5db4e505c3cc0f3
                   2.6 GB     md5:bad9828d8e88cc8c0ce6a29b3dc92d10
                   2.6 GB     md5:a12ded1b94ecfd10a996af6d89587a45
                   12.4 MB    md5:0f96a29aa2614f943ade95bb42cc290d
                   1.3 GB     md5:f43061c3c4811aec068fcde0e55a8ce2