Published June 15, 2024 | Version v1
Dataset | Open Access

Dataset for the paper: "StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation", CVPR 2024

  • 1. ETH Zurich
  • 2. LASTIG
  • 3. Institut national de l'information géographique et forestière
  • 4. École nationale des ponts et chaussées

Description

Three benchmark datasets for non-bijective image-to-image translation

PlanIGN

This dataset was constructed from data of the French National Mapping Agency (IGN). It comprises 1900 aerial images (ortho-imagery) at 3 m spatial resolution and two versions of their corresponding maps: one with toponyms and one without toponyms (suffixed _TU). We divided them into a training set (1000 images) and a test set (900 images). In our experiments, we use trainA & trainB for training and testA & testB_TU for testing.

Google_mismatch

We created non-bijective datasets from the maps dataset by separating the samples containing highways from those without. We excluded all satellite images (trainA) featuring highways and subsampled the maps (trainB) with varying proportions of highways, from 0% to 65%. For the test set, we selected 898 pairs without highways.
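The subsampling step above (drawing a target set in which a chosen fraction of samples contains the mismatched feature) can be sketched as follows. This is an illustrative helper, not the released code; the function name and the example identifiers are hypothetical.

```python
import random

def subsample_with_proportion(with_feature, without_feature, n_total, proportion, seed=0):
    """Draw n_total sample IDs, of which `proportion` come from `with_feature`.

    with_feature / without_feature: lists of sample identifiers
    (e.g. map tiles with and without highways).
    """
    rng = random.Random(seed)
    n_with = round(n_total * proportion)
    n_without = n_total - n_with
    chosen = rng.sample(with_feature, n_with) + rng.sample(without_feature, n_without)
    rng.shuffle(chosen)  # avoid ordering by feature presence
    return chosen

# e.g. 1000 target maps, 30% of them containing highways
maps = subsample_with_proportion(
    [f"hw_{i}" for i in range(700)],
    [f"no_hw_{i}" for i in range(2000)],
    n_total=1000, proportion=0.30)
```

Varying `proportion` from 0.0 to 0.65 reproduces the range of mismatch levels described above.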

BRATS_mismatch

We used two modalities from BraTS 2018: T1 and FLAIR. We selected transverse slices from the 60th to the 100th of each volume. Each scan was classified as tumorous if more than 1% of its pixels were labelled as tumor, and as healthy if it contained no tumor pixels. We provide "generate_mismatched_datasets.py" so users can generate datasets with varying proportions of tumorous samples for training. In our default setting, we have 800 training samples whose source images (T1) are healthy and whose target images (FLAIR) comprise 60% tumorous scans. The test set contains 335 paired scans of healthy brains.
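The tumorous/healthy classification rule above can be sketched as a small helper. This is an illustrative assumption-laden sketch, not the provided script: the function name is hypothetical, and it assumes the segmentation slice marks tumor pixels with nonzero labels (BraTS convention).

```python
import numpy as np

def classify_scan(seg_slice, tumor_threshold=0.01):
    """Classify a slice by its fraction of tumor-labelled pixels.

    seg_slice: 2D integer array in which nonzero values mark tumor pixels.
    Returns "tumorous" (> 1% tumor pixels), "healthy" (no tumor pixels),
    or None for in-between slices, which would be discarded.
    """
    frac = np.count_nonzero(seg_slice) / seg_slice.size
    if frac > tumor_threshold:
        return "tumorous"
    if frac == 0:
        return "healthy"
    return None
```

Applying this rule per slice yields the two pools from which mismatched training sets (e.g. the default 60% tumorous targets) are drawn.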

Please cite the following paper if you use our datasets:

@inproceedings{wu2024stegogan,
  title={StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation},
  author={Wu, Sidi and Chen, Yizi and Mermet, Samuel and Hurni, Lorenz and Schindler, Konrad and Gonthier, Nicolas and Landrieu, Loic},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
} 

In addition, you should cite the following paper if you use the Google_mismatch dataset:

@inproceedings{isola2017image,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}

And cite the following papers if you use the BRATS_mismatch dataset:

@article{menze2014multimodal,
  title={The multimodal brain tumor image segmentation benchmark (BRATS)},
  author={Menze, Bjoern H and Jakab, Andras and Bauer, Stefan and Kalpathy-Cramer, Jayashree and Farahani, Keyvan and Kirby, Justin and Burren, Yuliya and Porz, Nicole and Slotboom, Johannes and Wiest, Roland and others},
  journal={IEEE Transactions on Medical Imaging},
  volume={34},
  number={10},
  pages={1993--2024},
  year={2014}
}

@article{bakas2017brats17,
  title={Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features},
  author={Bakas, Spyridon and Akbari, Hamed and Sotiras, Aristeidis and Bilello, Michel and Rozycki, Martin and Kirby, Justin S and Freymann, John B and Farahani, Keyvan and Davatzikos, Christos},
  journal={Scientific Data},
  volume={4},
  number={1},
  pages={1--13},
  year={2017}
}

@article{bakas2018ibrats17,
  title={Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge},
  author={Bakas, Spyridon and Reyes, Mauricio and Jakab, Andras and Bauer, Stefan and Rempfler, Markus and Crimi, Alessandro and Shinohara, Russell Takeshi and Berger, Christoph and Ha, Sung Min and Rozycki, Martin and others},
  journal={arXiv preprint arXiv:1811.02629},
  year={2018}
}


Files

Benchmark_datasets_CVPR.zip (1.4 GB)
md5:dc9dc44c1867c73ca6b28da9ec15083f
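After downloading, the archive can be checked against the MD5 listed above. A minimal sketch using standard tools (md5sum from GNU coreutils on Linux; on macOS, `md5 -q Benchmark_datasets_CVPR.zip` prints the same digest):

```shell
# Write the expected checksum next to the archive and verify it.
echo "dc9dc44c1867c73ca6b28da9ec15083f  Benchmark_datasets_CVPR.zip" > Benchmark_datasets_CVPR.zip.md5
md5sum -c Benchmark_datasets_CVPR.zip.md5 || echo "checksum mismatch or file not downloaded yet"
```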