
Dataset Open Access

Learning Aerial Image Segmentation From Online Maps

Kaiser Pascal; Wegner Jan Dirk; Lucchi Aurelien; Jaggi Martin; Hofmann Thomas; Schindler Konrad

This is the CITY-OSM dataset used in the journal publication "Learning Aerial Image Segmentation From Online Maps".

Paper abstract:

This paper deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return it is available in virtually unlimited quantities for large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when trained on different data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap for distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.
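The core idea of deriving training data from OpenStreetMap is to rasterize vector map geometries (e.g. building footprints) into per-pixel label masks aligned with the aerial imagery. The following is a minimal, self-contained sketch of that rasterization step, not the paper's actual pipeline: it assumes polygon coordinates are already in pixel units and uses a simple ray-casting point-in-polygon test, whereas a real pipeline would reproject OSM geometries into the image frame and use an optimized rasterizer.

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(polygons, height, width):
    """Burn a list of polygons into a binary label mask (1 = building)."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for poly in polygons:
        for r in range(height):
            for c in range(width):
                # Test each pixel center against the polygon.
                if point_in_polygon(c + 0.5, r + 0.5, poly):
                    mask[r, c] = 1
    return mask

# A single square "footprint" covering pixel centers in rows/cols 2..5.
building = [(2, 2), (6, 2), (6, 6), (2, 6)]
mask = rasterize([building], 8, 8)
print(mask.sum())  # 16 pixels labeled as building
```

Masks produced this way inherit OSM's errors (misaligned, missing, or outdated geometries), which is exactly the label noise the paper studies.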

Files (23.8 GB)
File sizes: 2.0 GB, 5.4 GB, 11.0 GB, 472.1 MB, 1.7 kB, 9.6 MB, 4.9 GB
Views: 16,547
Downloads: 42,508
Data volume: 197.6 TB
Unique views: 14,834
Unique downloads: 11,774

