Published May 11, 2023 | Version 1.1
Dataset Open

BurnedAreaUAV Dataset (v1.1)

  • 1. CIIC - Polytechnic of Leiria
  • 2. Institute of Electronics and Informatics Engineering (IEETA)

Description

General Description

A manually annotated dataset, consisting of video frames and segmentation masks, for the segmentation of forest fire burned areas based on a video captured by a UAV. A detailed explanation of the dataset generation is available in the open-access article "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks".

Data Collection 

The BurnedAreaUAV dataset derives from a video captured at latitude 41° 23' 37.56" and longitude -7° 37' 0.32", at Torre do Pinhão, in northern Portugal, in an area characterized by shrubby to herbaceous vegetation. The video was captured during the evolution of a prescribed fire using a DJI Phantom 4 PRO UAV equipped with an FC6310S RGB camera.

Video Overview

The video captures a prescribed fire in which the burned area grows progressively. At the beginning of the sequence, a significant portion of the UAV sensor's field of view is already burned, and the burned area continues to expand as time goes by. The video was collected by an RGB sensor mounted on the drone, which was kept in a nearly stationary position for the duration of the data collection.

The video is about 15 minutes long, with a frame rate of 25 frames per second, amounting to roughly 22500 frames. Throughout this period, the progression of the burned area can be observed. The original video has a resolution of 720×1280 and is encoded in H.264 (MPEG-4 Part 10). No audio signal was collected.
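As a usage hint, the sketch below shows one way to extract every 100th frame (one frame every 4 seconds at 25 fps) from the raw video with OpenCV; the output directory name and the choice of OpenCV are illustrative and not part of the dataset tooling.

    # Minimal sketch: extract every 100th frame from the raw video with OpenCV.
    # The output directory name is arbitrary; adjust paths to your local copy.
    import cv2
    import os

    video_path = "MP4_video/original_prescribed_burn_video.mp4"
    out_dir = "extracted_frames"
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)                      # expected ~25
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))    # expected ~22500
    print(f"fps={fps:.2f}, frames={n_frames}")

    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 100 == 0:  # one frame every 4 seconds at 25 fps
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
        idx += 1
    cap.release()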

Manual Annotation

The annotation was done every 100 frames, corresponding to a sampling period of 4 seconds, and covers the entire length of the video. Two classes are considered: burned_area and unburned_area. The training set consists of 226 frame–mask pairs and the test set of 23. The training and test annotations are offset by 50 frames.
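The indexing implied by this sampling can be reproduced as in the sketch below, which pairs frame and mask file names for both splits; the exact index ranges are inferred from the counts above and from the file listing (test annotations starting at frame 20250), so they should be treated as illustrative.

    # Minimal sketch of the annotation indexing: training annotations every 100
    # frames starting at frame 0 (226 pairs), test annotations on the same
    # 100-frame grid offset by 50 frames (23 pairs, starting at frame 20250 as
    # in the file listing). The directory layout follows the PNG folder below.
    import os

    train_ids = range(0, 226 * 100, 100)              # 0, 100, ..., 22500
    test_ids = range(20250, 20250 + 23 * 100, 100)    # 20250, 20350, ..., 22450

    def pairs(split, ids, root="PNG"):
        for i in ids:
            frame = os.path.join(root, split, "frames", f"frame_{i:06d}.png")
            mask = os.path.join(root, split, "msks", f"mask_{i:06d}.png")
            yield frame, mask

    train_pairs = list(pairs("train", train_ids))
    test_pairs = list(pairs("test", test_ids))
    print(len(train_pairs), len(test_pairs))  # 226, 23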

We plan to expand this dataset in the future.


File Organization (BurnedAreaUAV_v1.rar)

The data is available in PNG, JSON (Labelme format), and WKT (segmentation masks only). The raw video data is also made available. 
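For reference, the sketch below rasterizes the burned_area polygons of one annotation into a binary mask, assuming the standard Labelme JSON layout (a "shapes" list of labelled polygons plus imageHeight/imageWidth); the 0/255 mask encoding is our own choice, not necessarily the one used in the PNG masks.

    # Minimal sketch: convert a Labelme JSON annotation into a binary mask.
    import json
    import numpy as np
    from PIL import Image, ImageDraw

    with open("JSON/train_valid_json/frame_000000.json") as f:
        ann = json.load(f)

    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = Image.new("L", (w, h), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        if shape["label"] == "burned_area":
            draw.polygon([tuple(p) for p in shape["points"]], fill=255)

    mask = np.array(mask)
    print("burned fraction:", (mask > 0).mean())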

Additionally, photos were taken that provide metadata about the position of the drone, including height and coordinates, the orientation of the drone and the camera, among other parameters. The geographic data regarding the location of the controlled fire is represented in a KML file that Google Earth and other geospatial software can read. We also provide two high-resolution orthophotos of the area of interest, taken before and after burning.
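As an illustration, the GPS portion of that photo metadata can be read with Pillow's EXIF support as sketched below; DJI-specific fields such as gimbal orientation are typically stored in XMP/maker data, which this sketch does not parse.

    # Minimal sketch: read GPS latitude/longitude/altitude from a UAV photo's
    # standard EXIF GPSInfo block using Pillow.
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("uav_photo1_metadata.JPG")
    exif = img.getexif()

    gps_ifd = exif.get_ifd(0x8825)  # GPSInfo IFD
    gps = {GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}
    print(gps.get("GPSLatitude"), gps.get("GPSLongitude"), gps.get("GPSAltitude"))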

The data produced by the segmentation models developed in "Burned area semantic segmentation: A novel dataset and evaluation using convolutional networks", comprising outputs in PNG and WKT formats, is also available upon request.

 

BurnedAreaUAV_dataset_v1.rar 
        MP4_video (folder)
             -- original_prescribed_burn_video.mp4

        PNG (folder)
              train (folder)
                frames (folder)
                 -- frame_000000.png (raster image)
                 -- frame_000100.png
                 -- frame_000200.png
                               
               msks (folder)
                 -- mask_000000.png
                 -- mask_000100.png
                 -- mask_000200.png                
                                          

            test (folder)
               frames (folder)
                 -- frame_020250.png
                 -- frame_020350.png
                 -- frame_020450.png

               msks (folder)
                 -- mask_020250.png
                 -- mask_020350.png
                 -- mask_020450.png
                               
        JSON (folder)
            train_valid_json (folder)
              -- frame_000000.json (Labelme format)
              -- frame_000100.json
              -- frame_000200.json
              -- frame_000300.json

            test_json (folder)
              -- frame_020250.json
              -- frame_020350.json
              -- frame_020450.json

        WKT_files (folder)
            -- train_valid.wkt (list of mask polygons)
            -- test.wkt

UAV photos (metadata)
        -- uav_photo1_metadata.JPG
        -- uav_photo2_metadata.JPG

High-resolution orthophoto files
        -- odm_orthophoto_afterBurning.png
        -- odm_orthophoto_beforeBurning.png

Keyhole Markup Language file (area under study polygon)
        -- pinhao_cell_precribed_area.kml
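
The WKT files listed above store the segmentation mask polygons; a minimal parsing sketch with Shapely follows, assuming one POLYGON per line in annotation order (this per-line layout is an assumption, not documented here).

    # Minimal sketch: read the mask polygons from a WKT file with Shapely.
    from shapely import wkt

    polygons = []
    with open("WKT_files/train_valid.wkt") as f:
        for line in f:
            line = line.strip()
            if line:
                polygons.append(wkt.loads(line))

    print(len(polygons), "polygons; first polygon area in pixel units:", polygons[0].area)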

Acknowledgements

This dataset results from activities developed in the context of projects partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P., namely MIT-EXPL/ACC/0057/2021 and UIDB/04524/2020, and under the Scientific Employment Stimulus - Institutional Call - CEECINST/00051/2018.

 

The source code associated with the article is also available.

Files (3.0 GB)

md5:dcbed0ebef1fa095076a53c92a7b2938   910.7 MB
md5:4f457e5191626dae10772a16c791552d   1.0 GB
md5:48796833e97ab36ea18219d8680d1f89   1.0 GB
md5:391e11c538e80ac7d7c86b052c6626ea   7.5 kB
md5:10bca11e8ba253fda89155a80a3c8a71   4.0 MB
md5:ee979bfdc8c38d1100d8abb2311fb8a8   3.9 MB

Additional details

Related works

Is published in
Journal article: 10.1016/j.isprsjprs.2023.07.002 (DOI)

Funding

UIDB/04524/2020 – Research Center in Informatics and Communications (Fundação para a Ciência e a Tecnologia)
MIT-EXPL/ACC/0057/2021 – Spatiotemporal Data Models and Algorithms for Earth and Environmental Sciences (Fundação para a Ciência e a Tecnologia)
CEECINST/00051/2018/CP1566/CT0001 (Fundação para a Ciência e a Tecnologia)