Published June 21, 2021 | Version v1
Journal article | Open Access

Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation

  • 1. Georgia Institute of Technology
  • 2. Memorial Sloan Kettering Cancer Center
  • 3. Peking University Cancer Hospital

Description

Weekly cone-beam computed tomography (CBCT) images are primarily used for patient setup during radiotherapy. To make CBCT images quantitatively useful, we present a 3D multitask deep learning model for simultaneous CBCT-to-CT translation and organs-at-risk (OARs) segmentation, driven by a novel physics-based artifact/noise-induction data augmentation pipeline. The augmentation technique generates multiple paired, registered synthetic CBCTs from a single planning CT; the resulting model can then translate real weekly CBCTs into higher-quality CT-like images while segmenting OARs using the high-quality planning CT contours. Given the resulting perfectly paired CBCT and planning CT/contour data, we use a supervised conditional generative adversarial network as the base model which, unlike CycleGAN (prevalent in the CBCT-to-CT translation literature) and other unsupervised image-to-image translation methods, does not hallucinate or produce randomized outputs. We train and evaluate on a large dataset of 95 lung cancer patients with planning CTs and weekly CBCTs.
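Because the augmentation pipeline yields perfectly paired CBCT/CT/contour volumes, a standard supervised (pix2pix-style) conditional GAN objective with an added segmentation head can be used. The sketch below illustrates one such multitask training step in PyTorch; the toy network definitions, loss weights, and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a multitask paired cGAN training step (PyTorch).
# Assumptions (not from this record): a 3D encoder-decoder generator with two
# heads (synthetic CT + OAR logits), a conditional patch discriminator, and
# adversarial + paired L1 + Dice losses with illustrative weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy stand-in for a 3D backbone with translation and segmentation heads."""
    def __init__(self, n_oars=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.ct_head = nn.Conv3d(16, 1, 1)        # synthetic CT intensities
        self.seg_head = nn.Conv3d(16, n_oars, 1)  # OAR logits

    def forward(self, cbct):
        h = self.backbone(cbct)
        return self.ct_head(h), self.seg_head(h)

class Discriminator(nn.Module):
    """Toy conditional patch discriminator on (CBCT, CT) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, cbct, ct):
        return self.net(torch.cat([cbct, ct], dim=1))

def dice_loss(logits, target_onehot, eps=1e-6):
    probs = torch.softmax(logits, dim=1)
    inter = (probs * target_onehot).sum(dim=(2, 3, 4))
    union = probs.sum(dim=(2, 3, 4)) + target_onehot.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv = nn.BCEWithLogitsLoss()

# One training step on a synthetic (CBCT, CT, OAR) triple produced by the
# physics-based augmentation; shapes are illustrative.
cbct = torch.randn(1, 1, 32, 64, 64)
ct = torch.randn(1, 1, 32, 64, 64)
oars = F.one_hot(torch.randint(0, 3, (1, 32, 64, 64)), 3).permute(0, 4, 1, 2, 3).float()

# Discriminator update: real pair vs. generated pair.
fake_ct, seg_logits = G(cbct)
d_real = D(cbct, ct)
d_fake = D(cbct, fake_ct.detach())
loss_d = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: adversarial + paired L1 (translation) + Dice (segmentation).
d_fake = D(cbct, fake_ct)
loss_g = adv(d_fake, torch.ones_like(d_fake)) \
       + 100.0 * F.l1_loss(fake_ct, ct) \
       + 1.0 * dice_loss(seg_logits, oars)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The paired L1 term is what the supervised setting makes possible: with registered CBCT/CT pairs the generator is penalized voxel-wise against the true planning CT, which is exactly the constraint unsupervised CycleGAN-style training lacks.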

Notes

Trained models corresponding to the experiments reported in Table 1 of the paper (https://arxiv.org/abs/2103.05690) are provided as an attachment (checkpoints.zip).
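The record does not document the checkpoint format. The following is a minimal sketch for inspecting the attachment, assuming the archive contains standard PyTorch state-dict files; the extracted file name used below is hypothetical and should be replaced with the actual contents of checkpoints.zip.

```python
# Inspect the attached checkpoints, assuming standard PyTorch state dicts.
# The archive layout and "generator.pth" name are hypothetical placeholders.
import zipfile
import torch

with zipfile.ZipFile("checkpoints.zip") as zf:
    print(zf.namelist())          # list the provided model files
    zf.extractall("checkpoints")

state = torch.load("checkpoints/generator.pth", map_location="cpu")  # hypothetical path
if isinstance(state, dict):
    for name, tensor in list(state.items())[:5]:
        print(name, getattr(tensor, "shape", tensor))
```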

Files (2.7 GB)

checkpoints.zip (2.7 GB)
md5:d5c91ce99efa98ed9de339186b62ee8b