Remote Sensing VQA - Low Resolution (RSVQA LR)

Dataset | Open Access

Sylvain Lobry; Diego Marcos; Jesse Murray; Devis Tuia
<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nmm##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Remote sensing</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Sentinel-2</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Visual Question Answering</subfield>
  </datafield>
  <controlfield tag="005">20220310134925.0</controlfield>
  <controlfield tag="001">6344334</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Wageningen University and Research</subfield>
    <subfield code="0">(orcid)0000-0001-5607-4445</subfield>
    <subfield code="a">Diego Marcos</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">ETH Zurich</subfield>
    <subfield code="a">Jesse Murray</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Ecole Polytechnique Fédérale de Lausanne</subfield>
    <subfield code="0">(orcid)0000-0003-0374-2459</subfield>
    <subfield code="a">Devis Tuia</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">9169791</subfield>
    <subfield code="z">md5:9042f398d25413a10ea530b7b2a94dd2</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/all_answers.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">15270117</subfield>
    <subfield code="z">md5:fed409776edd11790c596ea0848984c7</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/all_questions.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">95008155</subfield>
    <subfield code="z">md5:2329258d74d54600628b8652a0e42672</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/Images_LR.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1922393</subfield>
    <subfield code="z">md5:f925d70eb74bb4094966670cb4c2f840</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_test_answers.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">123273</subfield>
    <subfield code="z">md5:4a5ae90a5686bbcffd1d7ec06ddbb692</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_test_images.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2717368</subfield>
    <subfield code="z">md5:9bddc53d7399a43378f743ec0ff1f95f</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_test_questions.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">7428162</subfield>
    <subfield code="z">md5:a5ff787f9977b0050b9bbf4e32bbb533</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_train_answers.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">585112</subfield>
    <subfield code="z">md5:7d1e7c099c65e39b3e773578cdabe79a</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_train_images.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">11940731</subfield>
    <subfield code="z">md5:935d59a05d126496fe61c541b4ab2d55</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_train_questions.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2690962</subfield>
    <subfield code="z">md5:61ba49ece26c989f81a9a1e2fe0d475b</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_val_answers.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">123258</subfield>
    <subfield code="z">md5:7a9f267d2cd106025c45a2b68dce5351</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_val_images.json</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3483748</subfield>
    <subfield code="z">md5:67b99979ebd468330355d656bf4d6d29</subfield>
    <subfield code="u">https://zenodo.org/record/6344334/files/LR_split_val_questions.json</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2022-03-10</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire_data</subfield>
    <subfield code="o">oai:zenodo.org:6344334</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Université Paris Cité</subfield>
    <subfield code="0">(orcid)0000-0003-4738-2416</subfield>
    <subfield code="a">Sylvain Lobry</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Remote Sensing VQA - Low Resolution (RSVQA LR)</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a"><p>Remote sensing images contain a wealth of information that can be useful for a wide range of tasks, including land cover classification, object counting and detection. However, most available methodologies are task-specific, which inhibits generic and easy access to the information contained in remote sensing data. As a consequence, generating accurate remote sensing products still requires expert knowledge. With RSVQA, we propose a system for extracting information from remote sensing data that is accessible to every user: questions formulated in natural language are used to interact with the images. With this system, images can be queried to obtain high-level information specific to the image content, or relational dependencies between objects visible in the images. Using an automatic method, we built two datasets (from low- and high-resolution data) of image/question/answer triplets. The information required to build the questions and answers is queried from OpenStreetMap (OSM). The datasets can be used to train (when using supervised methods) and evaluate models for the RSVQA task.</p> <p>This page concerns the low resolution dataset.</p></subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isDocumentedBy</subfield>
    <subfield code="a">10.1109/TGRS.2020.2988782</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.6344333</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.6344334</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">dataset</subfield>
  </datafield>
</record>
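The MARCXML record above carries the dataset's file inventory in its `856` datafields (subfield `u` = URL, `s` = size in bytes, `z` = MD5 checksum). A minimal sketch of extracting that inventory with Python's standard `xml.etree.ElementTree`; the tag and subfield codes are taken from the record itself, while the function name is ours:

```python
import xml.etree.ElementTree as ET

# MARC21 slim namespace, as declared on the <record> element above.
MARC_NS = "{http://www.loc.gov/MARC21/slim}"


def list_files(marcxml: str):
    """Return (url, size_bytes, md5_hex) for every 856 datafield in a MARCXML record."""
    root = ET.fromstring(marcxml)
    files = []
    for df in root.iter(MARC_NS + "datafield"):
        if df.get("tag") != "856":
            continue
        # Map subfield codes (s, z, u) to their text content.
        sub = {sf.get("code"): sf.text for sf in df.findall(MARC_NS + "subfield")}
        files.append((
            sub.get("u"),
            int(sub.get("s", "0")),
            sub.get("z", "").removeprefix("md5:"),
        ))
    return files
```

Summing the `s` subfields gives the total download size of one copy of the dataset (roughly 150 MB for the JSON files plus `Images_LR.zip`).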
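Since the record lists an MD5 checksum for every file, downloads can be verified locally before use. A minimal sketch using only the standard library; the checksums are copied from the record above (three of the twelve files shown), and the helper names are ours:

```python
import hashlib

# MD5 checksums as published in the Zenodo record (selection).
EXPECTED_MD5 = {
    "all_answers.json": "9042f398d25413a10ea530b7b2a94dd2",
    "all_questions.json": "fed409776edd11790c596ea0848984c7",
    "Images_LR.zip": "2329258d74d54600628b8652a0e42672",
}


def md5sum(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path, name):
    """True if the file at `path` matches the checksum recorded for `name`."""
    return md5sum(path) == EXPECTED_MD5[name]
```

Chunked reading keeps memory flat even for the ~95 MB `Images_LR.zip` archive.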
| | All versions | This version |
|---|---|---|
| Views | 339 | 339 |
| Downloads | 888 | 888 |
| Data volume | 14.8 GB | 14.8 GB |
| Unique views | 278 | 278 |
| Unique downloads | 130 | 130 |