FLAME DETECTION FOR VIDEO-BASED EARLY FIRE WARNING SYSTEMS AND 3D VISUALIZATION OF FIRE PROPAGATION

Early and accurate detection and localization of flame is an essential requirement of modern early fire warning systems. Video-based systems can be used for this purpose; however, flame detection remains a challenging issue, because many natural objects share visual characteristics with fire. In this paper, we present a new algorithm for video-based flame detection, which employs various spatio-temporal features such as colour probability, contour irregularity, spatial energy, flickering and spatio-temporal energy. Various background subtraction algorithms are tested, and comparative results in terms of computational efficiency and accuracy are presented. Experimental results with two classification methods show that the proposed methodology provides high fire detection rates with a reasonable false alarm ratio. Finally, a 3D visualization tool for the estimation of fire propagation is outlined, and simulation results are presented and discussed.


Introduction
Video-based forest surveillance coupled with image processing techniques for flame and smoke detection is one of the most promising solutions for automatic forest fire detection due to its low cost and short response time. Besides early detection of wildfires, early intervention and efficient fire management are also very important in fire fighting. If fire-fighters had access to fire propagation estimates, fire management would be considerably more efficient. However, the main disadvantage of early warning systems based on video surveillance is the increased false alarm rate due to atmospheric conditions (clouds, shadows, dust particles), light reflections etc. [1].
Especially in the case of flame detection, the main challenge that researchers face is the chaotic and complex nature of the fire phenomenon and the large variation of flame appearance in video. In [2], Toreyin et al. proposed an algorithm in which flame and fire flicker are detected by analyzing the video in the wavelet domain, while in [3] a hidden Markov model was used to mimic the temporal behaviour of flame. Zhang et al. [4] proposed a contour-based forest fire detection algorithm using FFT and wavelets, whereas Celik and Demirel [5] presented a rule-based generic colour model for fire-flame pixel classification. More recently, Ko et al. [6] used hierarchical Bayesian networks for fire-flame detection, while [7] describes a fire-flame detection method using fuzzy finite automata.
Despite the extensive research results listed in the literature, video-based flame detection remains an open issue. This is due to the fact that many natural objects have colours similar to those of fire (including the sun, various artificial lights, or their reflections on various surfaces) and can often be mistakenly detected as flames. In this paper, we present a new algorithm for video-based flame detection, which employs a set of spatio-temporal features of fire such as colour probability, contour irregularity, spatial energy, flickering and spatio-temporal energy. More specifically, various background subtraction algorithms are investigated and comparative results in terms of computational efficiency and accuracy are presented. In order to accurately model the colour space of fire, a new colour analysis approach using non-parametric modelling is introduced, while the spatial energy of a candidate region is estimated by applying a 2D wavelet analysis only on the red channel of the image. In addition to the detection of the flickering effect, we introduce a new feature, the spatio-temporal energy, in order to further reduce the false alarm rate. The spatio-temporal energy concerns the variance of the spatial energy in a region of the image within a temporal window and aims to identify the irregular changes of the fire's shape. For the discrimination between fire and non-fire regions, two classification methods are investigated (a Support Vector Machine (SVM) classifier and a rule-based approach) and experimental results with both approaches are presented. Finally, a 3D visualization tool for the estimation of fire propagation is described, and simulation results with different sets of parameters are presented and discussed.
The rest of this paper is organized as follows: In Section 2, the proposed methodology is presented and the different processing steps are described in detail. In Section 3, experimental results with fire and non-fire video sequences are presented, while Section 4 describes the 3D fire propagation visualization tool. Finally, conclusions are drawn in Section 5.

Methodology
The proposed methodology initially applies background subtraction in order to detect moving objects and then colour analysis to identify candidate flame regions. Since candidate regions may correspond either to real fire or to fire-coloured objects, further processing is required to discard false candidates. This processing aims to identify spatio-temporal characteristics which distinguish fire from fire-coloured objects. As a result, a set of extracted features is generated for each candidate region, and the final decision is made by a classifier, as shown in Figure 1.

Figure 1. The proposed methodology
The different processing steps of the proposed algorithm are described in detail in the following sub-sections.

Background Subtraction
Thirteen background subtraction methods have been evaluated, both in terms of accuracy (using a test sequence with ground truth data) and speed (which is very important for fire detection applications). More specifically, the following algorithms have been evaluated: BGS 1: Adaptive Median [8], BGS 2: Gaussian Mixture Model [9], BGS 3: Improved Adaptive Gaussian Mixture Model [10], BGS 4: Running Average, BGS 5: Running Gaussian Average [11], BGS 6: Temporal Median [12], BGS 7: Eigenbackground [13], BGS 8: Bayes Classification [14], BGS 9: Improved Mixture of Gaussians [15], BGS 10: Adaptive Thresholding [16], BGS 11: Kernel Density Estimation based Background Subtraction [17], BGS 12: Adaptive Background Extraction [18], BGS 13: Frame Differencing. To evaluate the performance of these background subtraction algorithms, we used a video sequence from the VSSN06 algorithm competition [19]. The selected video sequence contains a moving background (trees, bushes etc.) and was recorded by a fixed camera. The ground truth of foreground objects in the scene is available, thus facilitating the evaluation procedure.
In Figure 3, the corresponding execution times are shown, while in Figure 2 the sensitivity vs. 1-specificity values of all algorithms are plotted. The default algorithm parameter values were used. Results show that the Adaptive Median (BGS 1) algorithm outperforms the others in terms of computational time, while in terms of accuracy, the Adaptive Median (BGS 1), Temporal Median (BGS 6) and Bayes Classification (BGS 8) are the most efficient methods. In the experimental results of Section 3, we have used the Adaptive Median (BGS 1) method.
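To illustrate the mechanism behind the selected method, the following is a minimal NumPy sketch of one adaptive-median background update step [8]; the function name and the threshold value are our own illustrative choices, not part of the implementations evaluated above.

```python
import numpy as np

def adaptive_median_update(background, frame, threshold=30):
    """One step of adaptive-median background subtraction: each background
    pixel moves one grey level towards the current frame, and pixels far
    from the background estimate are flagged as moving (foreground)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = diff > threshold
    updated = background.astype(np.int16) \
        + (frame > background).astype(np.int16) \
        - (frame < background).astype(np.int16)
    return np.clip(updated, 0, 255).astype(np.uint8), foreground
```

Because the background adapts by at most one grey level per frame, the update is very cheap, which is consistent with the low execution time of this method in our comparison.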

Colour Analysis
The second processing step aims to filter out non-fire-coloured moving pixels. Only the remaining pixels are considered for blob analysis, thus reducing the required computational time of the whole processing chain. To filter out non-fire moving pixels, we compare their values against a predefined RGB colour distribution created from a number of pixel samples taken from video sequences containing real fires.
Let x_1, x_2, ..., x_N be N fire-coloured samples of the predefined distribution. Using these samples, the probability density function of a pixel x_t can be estimated non-parametrically using a kernel K_h (Elgammal, 2000) as:

Pr(x_t) = (1/N) * Σ_{i=1..N} K_h(x_t - x_i)

If we choose our kernel estimator function K_h to be a Gaussian kernel, K_h = N(0, S), where S represents the kernel function bandwidth, and we assume a diagonal covariance matrix S with a different kernel bandwidth σ_j for the j-th colour channel, then the density can be estimated as:

Pr(x_t) = (1/N) * Σ_{i=1..N} Π_{j=1..3} (1 / sqrt(2π σ_j^2)) * exp(-(x_{t,j} - x_{i,j})^2 / (2 σ_j^2))

Using this probability estimate, a pixel is considered fire-coloured if Pr(x_t) > th, where the threshold th is global for all samples of the predefined distribution and can be adjusted to achieve a desired percentage of false positives. Hence, if a pixel has an RGB value which belongs to the distribution of Figure 4(b), it is considered a fire-coloured pixel, as shown in Figure 5.
After the blob analysis step, the colour probability of each candidate blob is estimated by summing the colour probabilities of the pixels in the blob.
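The colour test above can be sketched as follows. This is a minimal NumPy version of the kernel density estimate; the bandwidths and sample values in the example are illustrative assumptions, not the trained distribution used in our experiments.

```python
import numpy as np

def fire_colour_probability(pixel, samples, sigma=(15.0, 15.0, 15.0)):
    """Kernel density estimate Pr(x_t) of a pixel under the fire-colour
    distribution: a Gaussian kernel with a separate bandwidth sigma_j per
    RGB channel, averaged over the N stored fire-coloured samples."""
    sigma = np.asarray(sigma, dtype=float)
    diff = np.asarray(samples, dtype=float) - np.asarray(pixel, dtype=float)
    # product over the three colour channels of 1-D Gaussian kernels
    kernels = np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return float(np.mean(np.prod(kernels, axis=1)))
```

A pixel is then accepted as fire-coloured when its estimated probability exceeds the global threshold th.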

Contour analysis
The shape of flame objects is often irregular; thus, high irregularity/variability of the blob contour can also be considered a flame feature. This irregularity is identified by tracing the object contour, starting from any pixel on it. A direction arrow is defined for each pixel on the contour, which can be specified by a label L, where 0 ≤ L < 8, assuming 8-connected pixels, as shown in Figure 6.
The variability of the contour at each pixel can be measured by calculating the difference (distance) between the two consecutive directions (towards and away from the specific pixel) using the following circular distance, which returns a value between 0 and 4 for each pixel:

d(L_1, L_2) = min(|L_1 - L_2|, 8 - |L_1 - L_2|)
The average value of this distance function can be used as a measure of the irregularity of the contour.
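As a sketch, the irregularity measure can be computed from the chain of direction labels as follows (the function name and example codes are ours; the distance is the circular difference between consecutive 8-direction codes):

```python
def contour_irregularity(chain_codes):
    """Average circular distance (0..4) between consecutive 8-direction
    chain codes of a traced contour; higher values indicate a more
    irregular boundary, which is typical of flames."""
    n = len(chain_codes)
    total = 0
    for i in range(n):
        d = abs(chain_codes[i] - chain_codes[(i + 1) % n])
        total += min(d, 8 - d)  # circular distance between directions
    return total / n
```

A smooth, convex contour yields values near 0, while a jagged flame boundary approaches the maximum of 4.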

Spatial Wavelet Analysis
Since there is higher spatial variation in regions containing actual fire than in fire-coloured objects, the next step aims to detect the spatial variation within a moving fire-coloured blob. To this end, a two-dimensional wavelet transform is applied to the red channel of the image, as shown in Figure 7, and the spatial energy of each pixel is computed from the high-frequency subbands as:

E(i,j) = LH(i,j)^2 + HL(i,j)^2 + HH(i,j)^2
For each blob, the spatial wavelet energy is estimated by summing the individual energies of the pixels belonging to the blob, normalized by the blob size:

E_blob = (1/N_b) * Σ_{(i,j) ∈ blob} E(i,j)

where N_b is the number of pixels in the blob.
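A minimal sketch of this step, assuming a single-level Haar wavelet (the simplest choice; the wavelet family is not fixed by the method) implemented directly in NumPy:

```python
import numpy as np

def haar_subbands(img):
    """Single-level 2-D Haar transform of an image with even dimensions:
    returns the three high-frequency subbands (LH, HL, HH) at half resolution."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    lh = (a - b + c - d) / 2.0         # horizontal detail
    hl = (a + b - c - d) / 2.0         # vertical detail
    hh = (a - b - c + d) / 2.0         # diagonal detail
    return lh, hl, hh

def spatial_energy(red_channel):
    """Per-pixel spatial energy: sum of squared high-frequency coefficients."""
    lh, hl, hh = haar_subbands(red_channel)
    return lh ** 2 + hl ** 2 + hh ** 2

def blob_spatial_energy(energy_map, blob_mask):
    """Blob energy: mean of per-pixel energies inside the blob mask
    (the mask must be given at the half-resolution of the subbands)."""
    return float(energy_map[blob_mask].mean())
```

A flat fire-coloured surface (e.g. a red car) yields near-zero energy, while the fine texture of flames produces large high-frequency coefficients.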

Spatio-temporal Analysis
The shape of fire changes irregularly due to the airflow caused by wind or due to the type of burning material. The spatio-temporal analysis step aims to identify these changes in order to discriminate between fire regions and fire-coloured objects.
For this reason, a new feature is extracted, which considers the variation of the spatial energy in a blob within a temporal window of N frames. The variance of a pixel's spatial energy is estimated as:

V = (1/N) * Σ_{t=1..N} (E_t - Ē)^2

where N is the size of the temporal window, E_t is the spatial energy of the pixel at time instance t, and Ē is the average value of its spatial energy over the window. The final spatio-temporal energy mask is shown in Figure 9. For each blob, the total spatio-temporal energy S_blob is estimated by summing the individual energies of its pixels, normalized by the blob size:

S_blob = (1/N_b) * Σ V

where N_b is the number of pixels in the blob and the sum runs over its pixels.
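Under the same notation, the spatio-temporal energy can be sketched in NumPy as follows (window length and array shapes are illustrative):

```python
import numpy as np

def spatio_temporal_energy(energy_history):
    """Variance of each pixel's spatial energy over a temporal window.
    energy_history has shape (N, H, W): one spatial-energy map per frame."""
    return np.var(energy_history, axis=0)

def blob_st_energy(variance_map, blob_mask):
    """Total spatio-temporal energy of a blob, normalized by its size."""
    return float(variance_map[blob_mask].mean())
```

Fire-coloured but rigid objects keep a nearly constant spatial energy over time and therefore score close to zero, while flames produce large variances.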

Temporal Processing
Flickering is a very important characteristic of flame and is significant for discriminating between "flame" and "non-flame" regions. In our approach, we use a temporal window of N frames (N = 50 in our experiments), yielding a 1-D temporal sequence of N binary values for each pixel position. Each binary value is set to 0 if the pixel was labeled as "no flame candidate" or 1 if it was labeled as "flame candidate" after the background extraction and colour analysis processing steps. To quantify the effect of flickering, we traverse this temporal sequence for each "flame candidate" pixel and count the number of transitions from "no flame candidate" to "flame candidate" (0 -> 1). This count can be directly used as a flame flickering feature, with flame regions characterized by a sufficiently large value.
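Counting the 0 -> 1 transitions in a pixel's candidate sequence can be sketched as:

```python
def flicker_count(labels):
    """Number of 0 -> 1 transitions in a pixel's binary temporal sequence
    of flame-candidate labels; real flames flicker, so fire pixels are
    expected to produce a large count within the window."""
    return sum(1 for prev, cur in zip(labels, labels[1:])
               if prev == 0 and cur == 1)
```

A steadily lit fire-coloured object (e.g. a street lamp) stays at 1 throughout the window and produces a count of zero.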

Classification
For the classification of the 5-dimensional feature vectors (colour probability, irregularity of the contour, spatial wavelet energy, spatio-temporal energy, flame flickering) of each blob, we employed a Support Vector Machine (SVM) classifier with RBF kernels.In our experiments, the training of the SVM classifier was based on approximately 500 frames of fire and non-fire video sequences.
In addition to the SVM, a second classification approach based on a number of thresholds and rules was also adopted. More specifically, a threshold th_i is empirically defined for each feature i after a number of experiments (colour probability: th_1 = 0.002, contour: th_2 = 0.8, spatial wavelet energy: th_3 = 100, temporal energy: th_4 = 20, spatio-temporal variance: th_5 = 30). Then, the following classification technique is applied: first, the value of a metric C is computed for each feature vector f = (f_1, ..., f_5) as:

C = Σ_i F(f_i, th_i)

where F is a function defined as F(f_i, th_i) = 1 if f_i > th_i, and 0 otherwise. Then, the following rule is applied for each feature vector: if C ≥ M, the feature vector is classified as fire; otherwise it is considered a false alarm, i.e. non-fire (in our experiments, M = 3).
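The rule above amounts to a simple threshold vote. A sketch using the thresholds quoted in the text (the function name is ours):

```python
# Empirical thresholds from the text, in feature order: colour probability,
# contour irregularity, spatial wavelet energy, temporal (flicker) energy,
# spatio-temporal variance.
THRESHOLDS = (0.002, 0.8, 100.0, 20.0, 30.0)

def rule_based_classify(features, thresholds=THRESHOLDS, m=3):
    """C = sum_i F(f_i, th_i), where F is 1 when feature i exceeds its
    threshold; the blob is labelled as fire when at least m features agree."""
    c = sum(1 for f, th in zip(features, thresholds) if f > th)
    return c >= m
```

With M = 3, a blob is accepted as fire as soon as any three of the five features exceed their thresholds.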

Experimental Results
To evaluate the performance of the proposed method, videos containing fire or fire-coloured objects were used. Figure 10 shows the detection of flame along with the intermediate feature masks (background, colour, spatial wavelet, spatio-temporal and temporal map), while Figure 11 presents an example with a video containing a moving fire-coloured object. As is clear from the intermediate masks, the extracted feature values are higher in the case of flame due to the random behaviour of fire. Fourteen test videos were used for the evaluation of the algorithm. The first seven videos contain actual fires, while the rest contain fire-coloured moving objects, e.g. car lights, sun reflections etc. Screenshots from these videos are presented in Table 1 (the first column presents fire detection results, while the second column contains screenshots from videos with fire-coloured moving objects).
Results are summarized in Figure 12 and Figure 13 in terms of the true positive and true negative ratios. The definitions of these terms are given below: True Positive: the number of frames in which fire is correctly detected out of the total number of frames in a fire test video.
True Negative: the number of frames in which no fire was detected out of the total number of frames in a non-fire test video.
Experimental results show that the proposed method provides high detection rates in all videos containing fire, with a reasonable false alarm ratio in videos without fire. In most cases, the SVM classification provides higher true positive rates in videos containing fire, while the rule-based classification performs better in non-fire videos. The lowest true negative rates, especially with SVM classification, are observed in "Non_fire_video3" due to the continuous reflections of car lights on the road. However, we believe that these results may be improved in the future with better training of the SVM classifier. The proposed method ran at an average of 13.2 fps on the 320x240 video sequences of Table 1. The experiments were performed on a PC with a Core 2 Duo 2.66 GHz processor.
Table 1 Test videos used for the evaluation of the proposed algorithm.

3D Visualization of Fire Propagation
The proposed flame detection algorithm can be used for the detection and localization of fire by a video-based early warning system. However, the detection of the starting point of fire is just the first step in fire fighting. The next step is efficient fire management. To this end, the estimation and visualization of fire propagation is extremely significant, since it enables the fire-fighting forces to cope with the fire and manage their resources effectively. Most fire propagation simulation software presented in the literature yields a 2D view (mostly a top view) of the fire area, which may not provide a clear view of the situation to the persons responsible for the deployment of fire-fighting forces. In this paper, we present a user-friendly GIS simulator (Figure 14), which provides 2D/3D visualization of the fire propagation estimation output (ignition time and flame length) and enables interactive selection of parameters (e.g. ignition point, humidity parameters, wind direction etc.). The system is based on the Google Earth API [21], which is publicly available and allows the creation of impressive 3D animations of the fire propagation, in addition to static views. In this work, fire spread calculations are mainly based on the Fire Behavior SDK [22], which implements the popular BEHAVE algorithm [23], instead of the FireLib library [24], which was used in a previous work [25].

Conclusion
Early detection of fire is a crucial issue for the suppression of wildfires and the mitigation of disaster. Video-based systems for automatic early forest fire detection are a promising technology, which can provide real-time fire detection information with high accuracy. In this paper, we presented a flame detection methodology which identifies spatio-temporal features of fire such as colour probability, contour irregularity, spatial energy, flickering and spatio-temporal energy. Experimental results with a number of test videos have shown the great potential of the proposed method.
In the future, we plan to investigate the use of blocks instead of blobs in order to increase the computational efficiency of the flame detection algorithm and to compare our simulation results with real data from past fires.

Figure 2. Sensitivity vs. 1-specificity of the 13 background subtraction algorithms.
Figure 3. The execution times of the 13 background extraction algorithms using their default parameters.

Figure 4. (a) RGB colour distribution and (b) the colour distribution with a global threshold around each sample.

Figure 6. (a) The direction to the next boundary pixel is represented by a code (0 - 7). (b) An example of a boundary and the directions of the arrows.

Figure 7. Two-dimensional spatial wavelet analysis (the original image was downloaded from [20]).

Figure 8. (a) The energy image corresponding to the wavelet subimages of Figure 7 and (b) the initial image along with the detected blobs.

Figure 12. Experimental results with videos containing real fires.
Figure 13. Experimental results with videos containing fire-coloured moving objects.

Figure 14. 3D visualization of fire propagation.

Figure 15. Simulation results with (a) weak wind and uniform vegetation cover, (b) stronger wind and non-uniform vegetation cover.

Figure 15 shows two simulation results with different sets of parameters. More specifically, Figure 15(a) shows a simulation result from a valley area in Thebes, Greece, with uniform vegetation cover (the vegetation layer is activated) and low wind speed (1 m/s). On the other hand, in the second simulation in Figure 15(b) (Prato, Italy), the wind speed has been increased (4 m/s) and for this reason the final result is more directional. It is also worth mentioning that the propagation of the fire is halted at the edges of the red (urban) area, where the simulation considers that there is a lack of vegetation (fuel).