Published October 15, 2021
| Version v1
Dataset
Open
LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation
Description
The benchmark code is available at: https://github.com/Junjue-Wang/LoveDA
Highlights:
- 5987 high-spatial-resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan
- Covers two distinct geographical environments, Urban and Rural
- Advances both semantic segmentation and domain adaptation tasks
- Poses three considerable challenges: multi-scale objects, complex background samples, and inconsistent class distributions
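The "inconsistent class distributions" challenge above can be made concrete by comparing per-class pixel frequencies across domains. A minimal sketch, assuming LoveDA-style masks are single-channel arrays whose pixel values are integer class IDs (the function name and class count here are illustrative, not part of the official benchmark code):

```python
import numpy as np

def class_distribution(mask: np.ndarray, num_classes: int) -> np.ndarray:
    """Return the fraction of pixels assigned to each class ID in a mask."""
    # bincount tallies occurrences of each integer ID; minlength pads
    # classes absent from this particular mask with zero counts.
    counts = np.bincount(mask.ravel(), minlength=num_classes)
    return counts / counts.sum()

# Illustrative usage with a tiny synthetic 2x2 mask (IDs 0..2):
mask = np.array([[0, 0],
                 [1, 2]])
dist = class_distribution(mask, num_classes=3)
print(dist)  # fractions per class, summing to 1
```

Computing such distributions separately for the Urban and Rural splits would quantify the class-frequency shift that domain adaptation methods must bridge.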
Reference:
@inproceedings{wang2021loveda,
  title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
  author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
  booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  editor={J. Vanschoren and S. Yeung},
  volume={1},
  pages={},
  year={2021},
  url={https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4e732ced3463d06de0ca9a15b6153677-Paper-round2.pdf}
}
License:
The data and its copyright are owned by RSIDEA, Wuhan University. Use of the Google Earth images must respect the "Google Earth" terms of use. All images and their associated annotations in LoveDA may be used for academic purposes only; any commercial use is prohibited. (CC BY-NC-SA 4.0)