Dataset (Open Access)

Remote Sensing VQA - High Resolution (RSVQA HR)

Sylvain Lobry; Diego Marcos; Jesse Murray; Devis Tuia


DataCite XML Export
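The same record can also be fetched programmatically. As a minimal sketch (assuming Python with the third-party requests package installed), DOI content negotiation against doi.org returns the DataCite XML shown below:

# Fetch the DataCite XML for this record via DOI content negotiation.
# Sketch only: assumes network access and the `requests` package.
import requests

DOI = "10.5281/zenodo.6344367"

response = requests.get(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/vnd.datacite.datacite+xml"},
    timeout=30,
)
response.raise_for_status()
print(response.text)  # the XML document reproduced below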

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.6344367</identifier>
  <creators>
    <creator>
      <creatorName>Sylvain Lobry</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4738-2416</nameIdentifier>
      <affiliation>Université Paris Cité</affiliation>
    </creator>
    <creator>
      <creatorName>Diego Marcos</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-5607-4445</nameIdentifier>
      <affiliation>Wageningen University and Research</affiliation>
    </creator>
    <creator>
      <creatorName>Jesse Murray</creatorName>
      <affiliation>ETH Zurich</affiliation>
    </creator>
    <creator>
      <creatorName>Devis Tuia</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-0374-2459</nameIdentifier>
      <affiliation>École Polytechnique Fédérale de Lausanne</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Remote Sensing VQA - High Resolution (RSVQA HR)</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>Remote Sensing</subject>
    <subject>Visual Question Answering</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-03-10</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/6344367</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsDocumentedBy" resourceTypeGeneral="JournalArticle">10.1109/TGRS.2020.2988782</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.6344366</relatedIdentifier>
  </relatedIdentifiers>
  <version>1.0</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Remote sensing images contain a wealth of information that can be useful for a wide range of tasks, including land cover classification and object counting or detection. However, most available methodologies are task-specific, which inhibits generic and easy access to the information contained in remote sensing data. As a consequence, generating accurate remote sensing products still requires expert knowledge. With RSVQA, we propose a system for extracting information from remote sensing data that is accessible to every user: questions formulated in natural language are used to interact with the images. With this system, images can be queried to obtain high-level information specific to the image content, or relational dependencies between objects visible in the images. Using an automatic method, we built two datasets of image/question/answer triplets (one from low-resolution and one from high-resolution data). The information required to build the questions and answers is queried from OpenStreetMap (OSM). The datasets can be used to train (when using supervised methods) and evaluate models that solve the RSVQA task.&lt;/p&gt;

&lt;p&gt;This page is about the high-resolution dataset.&lt;/p&gt;</description>
  </descriptions>
</resource>
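The abstract describes the data as image/question/answer triplets distributed as JSON files. As a minimal sketch of how those triplets might be paired up in Python (the filenames and JSON keys below are illustrative assumptions, not taken from this record; consult the files shipped with the deposit for the actual schema):

# Sketch: pairing questions with their answers from the dataset's JSON files.
# File names ("questions.json", "answers.json") and keys ("questions",
# "answers", "img_id", "question", "answers_ids", "answer") are illustrative
# assumptions; check the deposited files for the real schema.
import json

with open("questions.json", encoding="utf-8") as f:
    questions = json.load(f)["questions"]
with open("answers.json", encoding="utf-8") as f:
    answers = json.load(f)["answers"]

for q in questions[:5]:
    # Assumes each answer id indexes directly into the answers list.
    texts = [answers[a_id]["answer"] for a_id in q["answers_ids"]]
    print(q["img_id"], q["question"], "->", texts)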
                   All versions   This version
Views                       445            445
Downloads                 1,480          1,480
Data volume              6.5 TB         6.5 TB
Unique views                370            370
Unique downloads            317            317
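
The record's files can also be listed for download via the Zenodo REST API. A rough sketch follows (the "files", "key", and "links" field names reflect Zenodo's current response format and may change; treat them as assumptions):

# Sketch: listing this record's files through the Zenodo REST API.
# Assumes network access and the third-party `requests` package.
import requests

RECORD_ID = "6344367"

record = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
record.raise_for_status()

for entry in record.json().get("files", []):
    print(entry["key"], entry["links"]["self"])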
