XDCycleGAN VC Dataset
Creators
1. Stony Brook University
2. Memorial Sloan Kettering
Description
This is the public dataset used to train XDCycleGAN, a deep learning model for unpaired image-to-image translation. TrainA contains the optical colonoscopy (OC) images; TrainB contains the virtual colonoscopy (VC) images. The model trained on this dataset is also included here.
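For CycleGAN-style training, the two folders are simply treated as unpaired image pools. A minimal sketch of loading that layout, assuming the TrainA/TrainB directory names described above and common image extensions (the helper name and extension list are illustrative, not part of the dataset):

```python
from pathlib import Path

def list_unpaired_images(root, exts=(".png", ".jpg", ".jpeg")):
    """Return (oc_paths, vc_paths) for CycleGAN-style unpaired training.

    Assumes the unpacked layout described above: TrainA/ holds optical
    colonoscopy (OC) frames, TrainB/ holds virtual colonoscopy (VC) images.
    """
    root = Path(root)
    oc = sorted(p for p in (root / "TrainA").iterdir() if p.suffix.lower() in exts)
    vc = sorted(p for p in (root / "TrainB").iterdir() if p.suffix.lower() in exts)
    return oc, vc
```

Because the translation is unpaired, the two lists need not have the same length and are sampled independently during training.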
Abstract:
Colorectal cancer screening modalities, such as optical colonoscopy (OC) and virtual colonoscopy (VC), are critical for diagnosing and ultimately removing polyps (precursors for colon cancer). The non-invasive VC is normally used to inspect a 3D reconstructed colon (from computed tomography scans) for polyps and, if found, the OC procedure is performed to physically traverse the colon via endoscope and remove these polyps. In this paper, we present a deep learning framework, Extended and Directional CycleGAN, for lossy unpaired image-to-image translation between OC and VC to augment OC video sequences with scale-consistent depth information from VC, and VC with patient-specific textures, color and specular highlights from OC (e.g. for realistic polyp synthesis). Both OC and VC contain structural information, but it is obscured in OC by additional patient-specific texture and specular highlights, hence making the translation from OC to VC lossy. The existing CycleGAN approaches do not handle lossy transformations. To address this shortcoming, we introduce an extended cycle consistency loss, which compares the geometric structures from OC in the VC domain. This loss removes the need for the CycleGAN to embed OC information in the VC domain. To handle a stronger removal of the textures and lighting, a Directional Discriminator is introduced to differentiate the direction of translation (by creating paired information for the discriminator), as opposed to the standard CycleGAN which is direction-agnostic. Combining the extended cycle consistency loss and the Directional Discriminator, we show state-of-the-art results on scale-consistent depth inference for phantom, textured VC and for real polyp and normal colon video sequences. We also present results for realistic pedunculated and flat polyp synthesis from bumps introduced in 3D VC models. You can find the code and additional details about XDCycleGAN via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.
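The extended cycle-consistency idea can be illustrated with a toy sketch (stand-in generators and plain L1 distance; this is not the authors' implementation): instead of comparing the OC reconstruction to the original OC image, the reconstruction is mapped back into the VC domain and compared there, so the lossy OC-to-VC mapping is not penalized for discarding texture and lighting.

```python
# Sketch of the extended cycle-consistency loss with toy stand-in generators.
# G_ab: OC -> VC (lossy, drops texture/lighting); G_ba: VC -> OC.
# Standard CycleGAN compares G_ba(G_ab(x)) with x; the extended loss
# instead compares in the VC domain: G_ab(G_ba(G_ab(x))) vs. G_ab(x).

def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def extended_cycle_loss(x_oc, G_ab, G_ba):
    """Cycle loss evaluated in the VC domain, so OC -> VC may be lossy."""
    v = G_ab(x_oc)             # translate OC to the VC domain
    x_rec = G_ba(v)            # translate back to the OC domain
    return l1(G_ab(x_rec), v)  # compare structure in VC, not in OC

# Toy example: an OC "image" is [structure, texture]; VC keeps only structure.
G_ab = lambda x: [x[0], 0.0]   # lossy: discards the texture component
G_ba = lambda v: [v[0], 0.5]   # re-synthesizes a generic texture
x = [1.0, 0.9]
standard = l1(G_ba(G_ab(x)), x)                # penalizes the lost texture
extended = extended_cycle_loss(x, G_ab, G_ba)  # zero: structure is preserved
```

The toy generators make the contrast concrete: the standard cycle loss is nonzero purely because the texture cannot be recovered, while the extended loss is zero because the geometric structure survives the round trip.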
Please cite the following papers when using this dataset.
The OC data came from the HyperKvasir dataset:
Borgli, Hanna, et al. "HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy." Scientific Data 7.1 (2020): 1-14.
The VC and fold annotation data came from the following TCIA dataset:
Smith K, Clark K, Bennett W, Nolan T, Kirby J, Wolfsberger M, Moulton J, Vendt B, Freymann J. (2015). Data From CT_COLONOGRAPHY. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.NWTESAY1
Files (277.0 MB total)

| Name | md5 | Size |
|---|---|---|
| OC_VC.zip | md5:736302380d1f4cd0cd7ebf862064b09e | 144.9 MB |
| | md5:49c83f776a9b71620f31a0b01286dca2 | 132.1 MB |