Published July 29, 2024 | Version v1
Conference paper (Open Access)

MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning

  • University of Copenhagen

Description

The volume of unlabelled Earth observation (EO) data is huge, but many important applications lack labelled training data. However, EO data offers the unique opportunity to pair data from different modalities and sensors automatically based on geographic location and time, at virtually no human labor cost. We seize this opportunity to create a diverse multi-modal pretraining dataset at global scale. Using this new corpus of 1.2 million locations, we propose a Multi-Pretext Masked Autoencoder (MP-MAE) approach to learn general-purpose representations for optical satellite images. Our approach builds on the ConvNeXt V2 architecture, a fully convolutional masked autoencoder (MAE). Drawing upon a suite of multi-modal pretext tasks, we demonstrate that our MP-MAE approach outperforms both MAEs pretrained on ImageNet and MAEs pretrained on domain-specific satellite images. This is shown on several downstream tasks, including image classification and semantic segmentation. We find that pretraining with multi-modal pretext tasks notably improves the linear probing performance compared to pretraining on optical satellite images only. This also leads to better label and parameter efficiency, which are crucial aspects in global-scale applications.
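To make the setup concrete, below is a minimal PyTorch sketch of the multi-pretext MAE idea described in the abstract: a shared fully convolutional encoder sees a randomly masked optical image, and one lightweight decoder head per co-located modality (e.g. optical reconstruction, elevation) contributes a reconstruction loss on the masked regions. The class name MultiPretextMAE, the toy two-layer encoder, the head shapes, and the 0.6 mask ratio are illustrative assumptions; they stand in for, and do not reproduce, the paper's actual ConvNeXt V2 implementation.

```python
import torch
import torch.nn as nn

class MultiPretextMAE(nn.Module):
    """Illustrative multi-pretext masked autoencoder (not the paper's code):
    one shared image encoder, one shallow decoder per pretext modality."""

    def __init__(self, in_channels=12, embed_dim=128, patch=16, modalities=None):
        super().__init__()
        # maps each pretext-task name to its output channel count (assumed set)
        self.modalities = modalities or {"optical": 12, "elevation": 1}
        self.patch = patch
        # toy fully convolutional encoder standing in for ConvNeXt V2
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim, kernel_size=patch, stride=patch),
            nn.GELU(),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, padding=1),
        )
        # one lightweight decoder head per modality
        self.decoders = nn.ModuleDict({
            name: nn.ConvTranspose2d(embed_dim, ch, kernel_size=patch, stride=patch)
            for name, ch in self.modalities.items()
        })

    def random_mask(self, x, ratio=0.6):
        # zero out a random subset of non-overlapping patches
        b, _, h, w = x.shape
        gh, gw = h // self.patch, w // self.patch
        keep = (torch.rand(b, 1, gh, gw, device=x.device) > ratio).float()
        keep = keep.repeat_interleave(self.patch, dim=2)
        keep = keep.repeat_interleave(self.patch, dim=3)
        return x * keep, 1.0 - keep  # masked input, mask over hidden pixels

    def forward(self, optical, targets):
        masked, hidden = self.random_mask(optical)
        z = self.encoder(masked)
        loss = 0.0
        for name, head in self.decoders.items():
            pred = head(z)
            # per-task MSE, computed only on the masked (hidden) regions
            loss = loss + ((pred - targets[name]) ** 2 * hidden).sum() \
                          / hidden.sum().clamp(min=1)
        return loss

# usage: modalities are paired by geographic location and time, then trained jointly
model = MultiPretextMAE()
optical = torch.randn(2, 12, 128, 128)                 # Sentinel-2-like input
targets = {"optical": optical,                         # reconstruct the input
           "elevation": torch.randn(2, 1, 128, 128)}   # co-located DEM target
loss = model(optical, targets)
loss.backward()
```

The key design point the abstract argues for is visible in the loop: each additional pretext task adds only a small decoder head and a loss term, while all supervisory signal flows back into the single shared encoder, which is what the downstream linear probing evaluates.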

Files

2405.02771v2.pdf (1.8 MB)
md5:a0fd1fb3a6d6cc1beefb335c3b44b155

Additional details

Funding

UK Research and Innovation
ELIAS: European Lighthouse of AI for Sustainability (Grant 10080425)