FireSafetyNet: An Image-Based Dataset with Pretrained Weights for Machine Learning-Driven Fire Safety Inspection
Contributors
Data collectors:
Project leader:
Project member:
Description
This dataset offers a diverse collection of images curated to support the development of computer vision models for detecting and inspecting Fire Safety Equipment (FSE) and related components. Images were collected in a variety of public buildings in Germany, including university buildings, student dormitories, and shopping malls. All images were captured by the authors with mobile phone cameras, providing a broad range of real-world scenarios for FSE detection.
In the journal paper associated with these image datasets, the open-source FireNet dataset (Boehm et al. 2019) was additionally used for training. However, to comply with licensing and distribution regulations, images from FireNet have been excluded from this dataset; interested users can download them directly from the FireNet repository if additional data is required. The provided weights (.pt files) were nevertheless trained with YOLOv8 on both the self-captured images and FireNet.
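Because the provided checkpoints are standard YOLOv8 weight files, they can, for example, be loaded with the Ultralytics Python package for inference. The following is a minimal sketch under that assumption; the weight-file path and test image are placeholders that must be replaced with actual paths from the extracted archives.

```python
# Minimal inference sketch (not shipped with the dataset): load one of the
# provided .pt weight files with the Ultralytics YOLOv8 API and run it on a
# single image. All paths below are placeholders.
from ultralytics import YOLO

model = YOLO("path/to/provided_best_weights.pt")  # e.g. the weights inside a *_val_data_and_weights archive
results = model.predict("path/to/test_image.jpg", conf=0.25)

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]         # predicted class name, e.g. a fire extinguisher
        print(f"{label}: confidence {float(box.conf):.2f}, box {box.xyxy.tolist()}")
```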
The dataset is organized into six sub-datasets, each corresponding to a specific FSE-related machine learning service:
- Service 1: FSE Detection - This sub-dataset provides the foundation for FSE inspection, focusing on the detection of primary FSE components such as fire blankets, fire extinguishers, manual call points, and smoke detectors (see the fine-tuning sketch after this list).
- Service 2: FSE Marking Detection - Building on the first service, this sub-dataset includes images and annotations for detecting FSE marking signs.
- Service 3: Condition Check - Modal - This sub-dataset addresses the inspection of FSE condition in a modal manner, focusing on instances where fire extinguishers might be blocked or otherwise non-compliant. It includes semantic segmentation annotations of fire extinguishers. Due to upload size limits, this set is split into 3_1_FSE Condition Check_modal_train_data (training images and annotations) and 3_1_FSE Condition Check_modal_val_data_and_weights (validation images, annotations, and the best weights).
- Service 4: Condition Check - Amodal - Extending the modal condition check, this sub-dataset involves amodal detection to identify and infer the state of FSE components even when they are partially obscured. It includes semantic segmentation annotations of fire extinguishers. Due to upload size limits, this set is split into 4_1_FSE Condition Check_amodal_train_data (training images and annotations) and 4_1_FSE Condition Check_amodal_val_data_and_weights (validation images, annotations, and the best weights).
- Service 5: Details Extraction - Inspection Tags - This sub-dataset provides a detailed examination of the inspection tags on fire extinguishers. It includes annotations for extracting semantic information, such as the next maintenance date, contributing to a thorough evaluation of FSE maintenance practices.
- Service 6: Details Extraction - Fire Class Symbols - The final sub-dataset focuses on identifying fire class symbols on fire extinguishers.
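As an illustration of how a sub-dataset can be used, the sketch below fine-tunes a YOLOv8 model on Service 1 (FSE Detection). It assumes the annotations are consumed through a YOLO-style data.yaml file created by the user for the extracted archive, listing the train/validation image folders and the four FSE classes; the paths, file names, and hyperparameters are illustrative assumptions, not values prescribed by the dataset.

```python
# Fine-tuning sketch for Service 1 (FSE Detection), assuming a user-created
# YOLO-format data.yaml that points at the extracted images and labels.
from ultralytics import YOLO

# Start from a generic pretrained checkpoint (or from the provided best weights).
model = YOLO("yolov8n.pt")

model.train(
    data="1_FSE_Detection/data.yaml",  # assumed path to the user-created config
    epochs=100,
    imgsz=640,
    batch=16,
)

metrics = model.val()     # evaluates on the validation split defined in data.yaml
print(metrics.box.map50)  # mAP@0.5 over the FSE classes
```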
This dataset is intended for researchers and practitioners in the field of computer vision, particularly those engaged in building safety and compliance initiatives.
Files (7.5 GB)

| Name | MD5 | Size |
|---|---|---|
| 1_FSE Detection.zip | c4765acd9b7c6e8eb0f72113af766d75 | 379.4 MB |
|  | cbaa5b68f4159763353bfa646f1f9fb0 | 1.8 GB |
|  | d901de07d1862887c47b69e615d848ac | 1.5 GB |
|  | 6744dbeea6246c791f06fdcb43e83a37 | 457.1 MB |
|  | 342ffd5e418c1935857b53615f1e023c | 1.5 GB |
|  | bd5f460e4a3fbde199ecea42e1aa979f | 457.5 MB |
|  | b6ac04d646cc01c28158d14872b8a3c6 | 233.4 MB |
|  | 5ac4e5cf9ee0508616f0a41c807202cc | 1.2 GB |
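Since MD5 checksums are listed for every archive, downloads can be verified locally before extraction. The snippet below is a generic check, not a script included in the dataset; the file name and expected checksum must be replaced with the values for the archive actually downloaded.

```python
# Generic integrity check: compute an archive's MD5 hash in streaming fashion
# (the archives are up to ~1.8 GB) and compare it with the listed checksum.
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

archive = Path("1_FSE Detection.zip")          # replace with the downloaded file
expected = "c4765acd9b7c6e8eb0f72113af766d75"  # replace with its listed MD5 value
print("OK" if md5_of(archive) == expected else "Checksum mismatch, please re-download")
```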
Additional details
Related works
- References:
  - Dataset: https://doi.org/10.5522/04/9137798.v1