FoldIt Public Dataset
Creators
1. Stony Brook University
2. Memorial Sloan Kettering
Description
This is the public dataset used for training FoldIt, a deep learning model for haustral fold detection and segmentation. TrainA contains the optical colonoscopy (OC) images. TrainB contains the haustral fold annotations overlaid on the virtual colonoscopy (VC) images. TrainC contains the VC images. The model trained on this dataset is also included here.
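A minimal sketch of walking the three directories and pairing samples by filename; the helper name, file extensions, and the assumption that corresponding OC, annotation, and VC images share a filename are illustrative, not part of the official dataset documentation:

```python
from pathlib import Path

def list_triplets(root):
    # Hypothetical helper: pair files across TrainA (OC images),
    # TrainB (fold annotations on VC), and TrainC (VC renderings)
    # by shared filename. The exact layout is an assumption.
    root = Path(root)
    names = sorted(p.name for p in (root / "TrainA").iterdir())
    triplets = []
    for name in names:
        b, c = root / "TrainB" / name, root / "TrainC" / name
        if b.exists() and c.exists():
            triplets.append((root / "TrainA" / name, b, c))
    return triplets
```

Samples that lack a counterpart in TrainB or TrainC are skipped rather than raising an error, which keeps the sketch robust to partially downloaded data.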
Abstract:
Haustral folds are colon wall protrusions implicated in the high polyp miss rate during optical colonoscopy procedures. If segmented accurately, haustral folds can allow for better estimation of missed surface area and can also serve as valuable landmarks for registering pre-treatment virtual (CT) and optical colonoscopies, to guide navigation towards the anomalies found in pre-treatment scans. We present a novel generative adversarial network, FoldIt, for feature-consistent image translation of optical colonoscopy videos to virtual colonoscopy renderings with haustral fold overlays. A new transitive loss is introduced to leverage ground truth information between haustral fold annotations and virtual colonoscopy renderings. We demonstrate the effectiveness of our model on challenging real optical colonoscopy videos as well as on textured virtual colonoscopy videos with clinician-verified haustral fold annotations. In essence, the FoldIt model is a method for translating between domains when a shared common domain is available. We use FoldIt to learn a translation from optical colonoscopy to haustral fold annotation via a common virtual colonoscopy domain. The code and additional details about FoldIt are available via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.
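The translation-via-a-common-domain idea can be sketched numerically: composing an OC→VC mapping with a VC→fold mapping should agree with a direct OC→fold mapping. This is a simplified stand-in for intuition only; the function names and the plain L1 penalty are assumptions, not the paper's actual networks or loss formulation:

```python
def l1_distance(a, b):
    # Mean absolute difference between two flat sequences of values.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def transitive_loss(g_oc2vc, h_vc2fold, f_oc2fold, oc_batch):
    # Hypothetical stand-ins: g_oc2vc maps OC -> VC, h_vc2fold maps
    # VC -> fold annotation, f_oc2fold maps OC -> fold directly.
    # The loss penalizes disagreement between the composed path
    # through the shared VC domain and the direct translation.
    composed = h_vc2fold(g_oc2vc(oc_batch))
    direct = f_oc2fold(oc_batch)
    return l1_distance(composed, direct)
```

With perfectly consistent mappings the loss is zero; any disagreement between the composed and direct paths contributes to the penalty, which is how ground truth pairs between fold annotations and VC renderings can supervise the OC branch.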
Please cite the following papers when using this dataset.
The OC data came from the HyperKvasir dataset:
Borgli, Hanna, et al. "HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy." Scientific data 7.1 (2020): 1-14.
The VC and fold annotation data came from the following TCIA dataset:
Smith K, Clark K, Bennett W, Nolan T, Kirby J, Wolfsberger M, Moulton J, Vendt B, Freymann J. (2015). Data From CT_COLONOGRAPHY. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.NWTESAY1
Files (511.4 MB)

foldit_model_public.zip

| md5 | Size |
|---|---|
| 799c6f2a6be520a61597d08c0bcae365 | 263.9 MB |
| 7689bf4ccafb16345f4d3741c962a40c | 247.6 MB |