Published October 15, 2021 | Version v1
Dataset | Open Access

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

  • Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, Yanfei Zhong (Wuhan University)

Description

The benchmark code is available at: https://github.com/Junjue-Wang/LoveDA

Highlights: 

  1. 5987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan
  2. Covers two contrasting geographical environments, urban and rural
  3. Advances both semantic segmentation and domain adaptation tasks (see the loading sketch after this list)
  4. Three considerable challenges: multi-scale objects, complex background samples, and inconsistent class distributions
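
For the semantic segmentation task, each image tile is paired with a pixel-level land-cover mask. Below is a minimal loading sketch in Python; the directory names (images_png/, masks_png/) and file layout are assumptions and should be checked against the released archives and the benchmark repository linked above.

from pathlib import Path

import numpy as np
from PIL import Image

# Assumed local layout after unpacking an archive; adjust to the actual structure.
root = Path("LoveDA/Train/Urban")
image_dir, mask_dir = root / "images_png", root / "masks_png"

# Pick the first tile and its mask with the same file name.
image_path = sorted(image_dir.glob("*.png"))[0]
mask_path = mask_dir / image_path.name

image = np.array(Image.open(image_path))  # H x W x 3 RGB tile
mask = np.array(Image.open(mask_path))    # H x W integer land-cover labels

print(image.shape, image.dtype)
print("label values present:", np.unique(mask))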

Reference:

@inproceedings{wang2021loveda,
  title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
  author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
  booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  editor={J. Vanschoren and S. Yeung},
  year={2021},
  volume={1},
  pages={},
  url={https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4e732ced3463d06de0ca9a15b6153677-Paper-round2.pdf}
}

License:

The data and its copyright are owned by RSIDEA, Wuhan University. Use of the Google Earth imagery must respect the "Google Earth" terms of use. All images and their associated annotations in LoveDA may be used for academic purposes only; any commercial use is prohibited. (CC BY-NC-SA 4.0)

Files (9.6 GB)

Name            Size       MD5
Datasheet.pdf   828.0 kB   7e0a6434f78c240cbfb18afab87404aa
                3.1 GB     a489be0090465e01fb067795d24e6b47
                4.0 GB     de2b196043ed9b4af1690b3f9a7d558f
                2.4 GB     84cae2577468ff0b5386758bb386d31d
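
The MD5 checksums listed above can be used to verify downloaded files. Below is a minimal verification sketch in Python; file paths are passed on the command line, since the archive names are not reproduced here.

import hashlib
import sys
from pathlib import Path

# MD5 checksums taken from the file listing above.
EXPECTED_MD5 = {
    "7e0a6434f78c240cbfb18afab87404aa",  # Datasheet.pdf, 828.0 kB
    "a489be0090465e01fb067795d24e6b47",  # 3.1 GB archive
    "de2b196043ed9b4af1690b3f9a7d558f",  # 4.0 GB archive
    "84cae2577468ff0b5386758bb386d31d",  # 2.4 GB archive
}

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    for name in sys.argv[1:]:
        path = Path(name)
        ok = md5sum(path) in EXPECTED_MD5
        print(f"{path.name}: {'OK' if ok else 'MD5 mismatch'}")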
