Development of an early building fire detection model based on computer vision

 1. Article information

The article introduced this time is a 2023 paper on computer vision-based fire detection from Yonsei University in South Korea, titled "Development of early fire detection model for buildings using computer vision-based CCTV".

2. Summary

Fires in buildings directly affect the lives of occupants. Therefore, there is a need to develop a safer environmental system to minimize the damage caused by indoor fires.

In recent years, research on fast fire detection based on computer vision has aimed to use deep learning to overcome the limitations of conventional fire detectors and to prevent false alarms. However, there is still a lack of research on indoor fire video detection models and on applying them in actual test rooms. This paper develops a computer vision-based early fire detection model (EFDM) that uses an indoor closed-circuit television (CCTV) monitoring system. Fire detection times are obtained through real fire tests, and the feasibility and necessity of using the model indoors are confirmed by comparing these times with those of conventional fire detectors. The developed model achieves recall, precision and mAP0.5 of 0.97, 0.91 and 0.96, respectively. Fires in the indoor fire video test set were detected within 8 s, and the experiments confirmed that fires from the three combustibles specified by Underwriters Laboratories standards can be detected. The difference in fire detection time between the EFDM and a common fire detector was up to 307 seconds, and the usable range was confirmed by detecting a fire within 1 second at the maximum visible distance of the CCTV camera. The proposed method helps reduce property damage and casualties caused by fires.

3. Introduction

According to a report from the National Fire Agency, there were 38,659 fires in South Korea in 2020. Fires in buildings and structures accounted for 64.5% of all incidents and for most of the resulting casualties and property losses (82.2%, 82.6% and 88.7%, respectively). In other words, the greatest number of casualties and the greatest property damage occurred in buildings and structures. Outside South Korea, China accounted for 39% of the total number of reported fire incidents (2007-2010). It is therefore necessary to analyze and implement measures that reduce the number of fires occurring in buildings. In addition, researchers have found that large-scale fires can affect the emotions and risk perception of those who experience them, underscoring the importance of developing technologies that minimize fire accidents.

One way to minimize losses from fire accidents is to report the fire at an early stage. If the "golden time" of 5 minutes is exceeded, property damage increases by a factor of 3.6 and casualties by a factor of 1.5. These results suggest that timely detection and reporting of fires is a key factor in minimizing property damage and casualties.

The models established in existing studies focus on expressing model performance through various indicators, such as detection rate, accuracy, true positive rate, and false positive rate. However, it is difficult to determine the early fire detection speed, which is the key advantage of CCTV-based fire detection, from model performance metrics alone. Few researchers have reported both performance validation and the fire detection speed (frames per second, FPS) of video fire detection (VFD) models so as to quantitatively assess the likelihood of fire detection. Existing work concentrates on model training performance rather than on the model's actual fire detection performance (such as fire detection time as a function of distance, or detection time compared with conventional fire detectors). The table below summarizes current research on fire detection using computer vision-based CCTV.

8e67c0e9857496488c5dfa3f79ce901f.png

Most studies use fire image datasets collected outdoors, or do not distinguish between indoor and outdoor scenes; research on indoor fire detection in buildings is still lacking. Indoor fires cause considerable damage, so research on indoor fire detection is necessary. Training and validation on early-stage fire datasets, which is precisely where CCTV-based fire detection has its advantage, is also understudied; because fires have a "golden time", research that can detect fires quickly in their early stages is important. Finally, few studies quantitatively compare the detection time of the developed model with that of the fire detectors actually used in buildings, a comparison that is important for confirming the need for computer vision models using CCTV in indoor environments.

To fill these knowledge gaps, this study does the following: (1) collects a fire image dataset focusing on indoor fires; (2) builds a new early fire detection model (EFDM) trained on the early-stage fires in the collected dataset; and (3) applies the model in a test room to quantitatively evaluate the EFDM's fire detection time and compares it with general fire detectors to analyze its feasibility and necessity. The framework of the study is shown in the figure below.

ab49a0dc173bb9addfdc56b9bfdd91c0.png

This research proposes a new approach to realizing safe building environment systems by utilizing the latest ICT (information and communication technology) to minimize fire detection time. The results of this study are expected to help reduce property damage and loss of life from fires. In addition, research into combining the functions of commonly used fire detectors with CCTV-based fire detection can improve occupants' confidence in fire detectors. In the longer term, the approach is expected to develop from the building level to the community and city level, eventually becoming part of the technology for building a safety net.

4. Model building

A. Fire and Smoke Dataset

This subsection describes the types and characteristics of the image datasets collected for training, as well as the visually similar image datasets used to prevent false alarms. Recent studies on fire detection models have trained models using existing flame images. However, one study reported that two-thirds of fire deaths occur in buildings with non-working smoke detectors. Therefore, this study developed a model that detects both flames and smoke.

The fire (flame and smoke) images collected for this study come from data provided by AI HUB. A total of 10,163 images meeting the criteria of early-stage and indoor fires were selected and used. Two classes, flame and smoke, are used for training, as shown in the figure below.

993c12c441e432a2f40fd1b6724d1e71.png

The study also collected images that may be mistakenly identified as indoor fires. Underwriters Laboratories in the United States specifies the following false-alarm test items for video fire detectors: direct sunlight and sun-related sources, electric welding, blackbody sources (electric heaters), and artificial lighting (incandescent, fluorescent, halogen). In addition to these test items, the study added objects commonly seen indoors, such as candles, rainbows, clothing, flags, red shirts, yellow wires, light reflections, and cigarette smoke. The resulting images were combined with an additional set of images generated indoors, giving 514 images used in training to prevent false alarms. The image dataset was collected through AI HUB and Google Search, and the collected images were labeled using Roboflow.
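As a rough illustration only (the paper does not publish its dataset configuration; the paths, file layout, and the use of unlabeled background images below are assumptions), a YOLO-style dataset for the two classes could be organized like this:

```python
# Hypothetical YOLO-format dataset config for the flame/smoke data
# (paths and layout are illustrative; the study's actual structure is not given).
from pathlib import Path

import yaml  # pip install pyyaml

dataset_root = Path("datasets/indoor_fire")

data_cfg = {
    "path": str(dataset_root),
    "train": "images/train",        # ~10,163 flame/smoke images
    "val": "images/val",
    "nc": 2,                        # number of classes
    "names": ["flame", "smoke"],    # the two classes used for training
}

# One common way to use the 514 false-alarm images (candles, red shirts,
# light reflections, ...) is to add them as background images: they sit in
# images/train with no label file, which teaches the model not to fire on them.
with open("fire.yaml", "w") as f:
    yaml.safe_dump(data_cfg, f, sort_keys=False)
```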

B. YOLOv5 network

Object detection is a computer vision technique that automatically recognizes specific objects, such as people, cars, buildings, or clothes, and localizes them with bounding boxes. YOLO is a deep-learning object detection model: the image is divided into grid regions, and each region is assigned a probability of containing an object. Because it is a unified detector that locates and classifies objects in a single pass, it is well suited to real-time video recognition, which increases object recognition speed. Since the goal of this research is to detect fires quickly and accurately, YOLO, with its good real-time performance, was chosen. In this study, the YOLO model was trained for fire detection on a custom flame and smoke image dataset, and YOLOv5s, the smallest and fastest variant available at the time, was used as the pre-trained starting model.
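As a minimal sketch of what fine-tuning YOLOv5s on this data could look like (assuming the public ultralytics/yolov5 repository and a dataset config like the fire.yaml sketched above; the image size, batch size, and epoch count are illustrative assumptions, not values reported in the paper):

```python
# Sketch: fine-tune the YOLOv5s pre-trained weights on the flame/smoke dataset.
# Run from inside a clone of https://github.com/ultralytics/yolov5.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--weights", "yolov5s.pt",    # smallest, fastest YOLOv5 variant
        "--data", "fire.yaml",        # flame + smoke dataset config
        "--img", "640",               # illustrative input size
        "--batch-size", "16",         # illustrative batch size
        "--epochs", "100",            # illustrative epoch count
        "--name", "efdm_yolov5s",
    ],
    check=True,
)
```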

The established model is applied to a video test set for performance evaluation. Indoor fires are verified with two test videos, and the difference between when the fire starts and when it is detected is derived. In addition, the occurrence of false alarms is checked against the indoor objects shown in the video test set.

6c62c1a46d8ed9c554fe42ca12a35915.png

The video test set is summarized in the table above. The first video shows a bedroom fire caused by placing a hair dryer on a bed and covering it with a blanket; the time code starts at 0 seconds, when the blanket begins to smoke faintly. The video contains lamps, dolls, and red objects, everyday bedroom items that could be mistaken for flames, as well as a mosquito net that could be mistaken for smoke. The second video shows a living-room fire originating from the curtains, with part of the fire hidden behind the sofa. The video captures the fire from the moment it starts, and the fire start time was set at 16 seconds. It also includes lighting that could be mistaken for flames, curtains that could be mistaken for smoke, and similar objects such as skylights.
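One way to derive the delay between fire start and first detection on such a clip is sketched below (a rough illustration, not the authors' code: the checkpoint name, video file name, and 0.5 confidence threshold are assumptions made only for the example; the 16 s start time is the one stated for the living-room video):

```python
# Sketch: first-detection delay of a trained model on a recorded test video.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.5                      # confidence threshold (illustrative)

FIRE_START_S = 16.0                   # annotated fire start in the clip
cap = cv2.VideoCapture("living_room_fire.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

frame_idx, first_detection_s = 0, None
while first_detection_s is None:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV gives BGR frames
    results = model(rgb)
    if len(results.xyxy[0]):          # any flame/smoke box above the threshold
        first_detection_s = frame_idx / fps
    frame_idx += 1
cap.release()

if first_detection_s is None:
    print("no detection in this clip")
else:
    print(f"first detection at {first_detection_s:.1f} s, "
          f"delay = {first_detection_s - FIRE_START_S:.1f} s after fire start")
```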

C. Fire detection experiment

In this study, a case study is conducted to validate indoor EFDM. Case studies are divided into three categories according to the purpose of measurement. The following table lists the three cases.

2ca47c38b34ee50f3582045ad955302d.png

In Case 1, fire detection by the EFDM is confirmed; in Case 2, the necessity of indoor fire detectors is confirmed; and in Case 3, the usable range of the EFDM is verified.

The performance of the EFDM is verified against the "video image fire detector" test proposed by UL 268B, confirming its fire detection capability in Case 1. For a video image fire detector, whether or not the fire is detected within 4 minutes is determined.

Case 2 adopts the "fire detection and alarm systems" standard recommended by ISO 7240 as the test standard for general fire detectors. The detection time of the developed EFDM is compared with that of common fire detectors.

Finally, in Case 3, the fire detection time is derived at the limit of the visible range of the webcam used in this study to test the usable range of the EFDM; in other words, the possibility of detecting a fire at distances matching the webcam's rated performance is demonstrated.

In this study, the fire experiments were carried out in an indoor environment. The outdoor environment differs from the indoor environment in various factors; for example, air velocity differs, and this was taken into account. Indoor air velocity is lower than outdoor air velocity, at most 0.15 m/s [45], so the experiments were performed with the air velocity limited to less than 0.15 m/s. All experiments were performed in duplicate, and the results were obtained as the average of the two runs, measuring the time from the ignition of the combustibles. Since this study compares the EFDM with common fire detectors, both the video image fire detector criteria and the common fire detector criteria must be satisfied. The laboratory used in this study is shown in the figure below.

67a746d1092004fb6eeb5c27a031d07f.png

The size of the test chamber is shown in the figure below, which meets all the conditions required for this experiment. The length, width and height of the room are 15 meters, 8 meters and 4 meters respectively.

e11b8138009a54556425179d4952c810.png

The real-time hardware used in this experiment is a Windows 10 computer with an Intel(R) Core(TM) i7-7700HQ CPU (base frequency 2.81 GHz), 16.0 GB of memory, and an NVIDIA GeForce GTX 1060 GPU. The webcam used for real-time capture (PRODEAN SH003) has Full HD 1080p resolution (2 megapixels), a frame rate of 30 fps, a 3.6 mm fixed lens, and a 90° viewing angle. In this study, CCTV surveillance was conducted using this webcam.
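For context, grabbing frames from such a webcam for monitoring could look roughly like this (the device index and the use of OpenCV are assumptions; the paper does not describe its capture software):

```python
# Sketch: open the CCTV webcam at its stated Full HD / 30 fps settings.
import cv2

cap = cv2.VideoCapture(0)                   # device index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)     # 1080p (~2 MP) sensor
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
print("frame grabbed:", ok, "shape:", frame.shape if ok else None)
cap.release()
```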

Experimental setup for Case 1: First, the test criteria that Underwriters Laboratories recommends for video image fire detectors are summarized in the table below. The combustible materials required for the experiment fall into three types: paper fires, wood fires, and flammable liquid fires. A fire is to be created under the conditions listed for each combustible; the performance of a video image fire detector is recognized only if it detects the fire within 4 minutes. In this study, paper, wood, and flammable liquid fires were tested as combustibles according to the three standards recommended by UL 268B.

556356f6059366c83c627a28f433f37c.png

UL 268B defines the length, width and height of the test chamber as 11 m, 6.7 m and 3.0 m, respectively. However, UL 268B does not specify the distance or height between the combustibles and the camera. Therefore, the experiments were conducted in the test chamber with the distance and height adjusted: the webcam was mounted 3 m above the ground and installed at a distance of 5.5 m (11 m / 2) or more from the combustibles. The laboratory environment described above was used for Case 1.

Experimental setup for Case 2: Case 2 applies the criteria for general fire detectors. Before listing the standards, the types of fire detectors are introduced. The National Fire Safety Code (NFSC) 203 sets standards for the installation of fire detectors in buildings: smoke detectors should be installed in rooms used for purposes such as sleeping, lodging, and hospitalization.

Photoelectric smoke detectors work on the principle that smoke blocks or scatters light; when smoke enters the detection chamber, the light is scattered and a fire is identified. The photoelectric detector used here operates at an obscuration of 10%/m. However, smoke detectors can give false alarms in locations where flames are routinely handled, such as kitchens and boiler rooms; heat detectors should be installed in these areas instead. In this study, we used the fixed-temperature detector, the most widely used type of heat detector, for comparison. Fixed-temperature detectors operate when the ambient temperature exceeds a set value; the nominal operating temperature here is 65°C.

The performance of the photoelectric detector used as the smoke detector and the fixed-temperature detector used as the heat detector was compared with that of the developed EFDM. The fire detection times of the general heat and smoke detectors are the experimental values reported by Choi et al., who measured the influence of wind speed on fire detection time by installing an air-conditioning system and common fire detectors in a test room. Sixteen fire detectors were installed, and the fire detection times were obtained by distance, as shown in the figure below.

5c98e40cac312d058f2c5b9321a11681.png

For the comparison in this study, the values measured at an air velocity of 0 m/s were used. As shown in the figure above, detectors (A) and (D) are located 2 m and 6 m from the combustibles, respectively, but no detector sits at the 4 m point between (A) and (D). Therefore, the data from detectors (B) and (C), the fire detectors to the left and right that are both 4 m from the combustibles, were used, and the value at 4 m was taken as the average of these two data points.

ISO 7240-9 specifies the combustibles for testing general fire detectors. In this study, n-heptane and cotton wicks were used in experiments according to the two fire types suggested by ISO 7240-9, and the detection results of ordinary fire detectors and the EFDM were compared. The combustibles are listed in the table below.

e211902afb4b4673c06f16a7cfe225e1.png

ISO 7240 defines the length, width and height of the test room as 10±1 m, 7±1 m and 4±0.2 m, respectively. The test room environment for Case 2 is shown in Figure 3. A total of three webcams were installed, the nearest 2 m from the combustibles. South Korea currently stipulates that the coverage area of a fire detector shall not exceed 150 square meters, which corresponds to detection within a maximum radius of about 6.9 m. Therefore, in Case 2 the experiments were performed at 2 m intervals out to a distance of 6 m. The webcams were installed 4 m above the ground.
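For reference, the 6.9 m figure follows from treating the 150 m² coverage limit as a circle centered on the detector: $r = \sqrt{A/\pi} = \sqrt{150\,\mathrm{m}^2/\pi} \approx 6.9\,\mathrm{m}$.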

Experimental setup for Case 3: In Case 3, the usable range is checked through fire tests at the maximum visible range of the webcam. A flammable liquid is used as the combustible, with the fire conditions set according to the combustible conditions recommended by UL 268B (see Table 4 for details). This combustible was chosen because it covers fires of varying sizes and produced the smallest fire in the experiments.

The recommended viewing range of the webcam used in this experiment is 10-15 m. Therefore, the combustible was placed at a measured distance of 15 m, as shown in the diagram below. In addition, the webcam was installed at a height of 4 m to cover the experimental conditions of Cases 1 and 2.

76ab07d538c4f9c9c6cab657081b13a5.png

5. Experimental results and analysis

c09647c6dfd8bfc66bc3d7ad9eaa74f3.png

The table above shows the performance of the developed model on the validation set, with recall, precision and mAP0.5 values of 0.93, 0.94 and 0.96, respectively. On the test image dataset, the recall, precision and mAP0.5 values are 0.97, 0.91 and 0.96, respectively. Some samples from the test image dataset are shown in the figure below.
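(For reference, these are the standard object-detection definitions of the reported metrics, not formulas stated in the paper: with TP, FP and FN counted from predicted boxes matched to ground truth at IoU ≥ 0.5,

$\text{precision} = \frac{TP}{TP+FP}, \qquad \text{recall} = \frac{TP}{TP+FN}, \qquad \text{mAP}_{0.5} = \frac{1}{C}\sum_{c=1}^{C} AP_c,$

where $AP_c$ is the area under the precision-recall curve of class $c$.)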

21289cb4c561fa340f16a73442b4a6ce.png

A. Detectability Results of Video Image Fire Detectors (Case 1)

1a01d72482494e4e85e3d821818d8f58.png

As shown in the figure above, the early fire detection time and indoor usability of the EFDM were determined through the experiments suggested by UL 268B. As mentioned earlier, UL 268B requires video image fire detectors to detect a fire within 4 minutes. The following table gives the fire detection time for each combustible in Case 1.

2bcac41832ed7543c4f03531a8392362.png

Because flammable liquid fires are smaller than the other combustible fires, their detection is slower than that of paper or wood fires. A weakness of YOLO is that very small shapes are difficult to recognize; as a result, detection times for flammable liquid fires are slower because the small fire size makes the shape hard to distinguish clearly. Smoke was not detected for any of the three combustibles. For flammable liquid fires, this is because the smoke is not visible at all. For paper and wood fires, faint smoke could be confirmed visually at the scene, but the limited image quality of the webcam meant that the faint smoke of the early fire was not captured; smoke that cannot be seen on the CCTV screen cannot be detected. A dataset of blurry smoke images was additionally trained for detection, but this increased false positives, such as mistaking gray or vinyl floors for smoke. This result indicates that the image quality of the webcam used in this study is a limiting factor for detecting faint smoke.

Based on the Case 1 tests, fires from all three combustibles specified in UL 268B were detected within 16 seconds using the EFDM. Therefore, the EFDM developed in this study satisfies the video image fire detector test standard, which requires detection within 4 minutes, and its fire detection capability is confirmed.


B. EFDM detection results by distance and comparison with general fire detectors (Case 2)

The distance at which a fire can be detected was measured, and the results were compared with those of general fire detectors. The real-time fire detection times are shown in the table below, and the fire video is shown below it.

5310c803dfa243739cacc7e20eaceaf2.png

40f6dca7cf3354d019eabd8edc66dd1b.png

Flame detection using the EFDM is delayed most at the closest distance, which can be attributed to the webcam angle: most of the images in the training dataset were captured at angles of 45° or less between the combustible and the camera. Therefore, it takes slightly longer to identify fires at a distance of 2 m, where the angle is larger than in the training images.

47df6fc69908a4421b442547f3262c0b.png

The figure above shows the difference in fire detection time between the EFDM and general fire detectors as a function of distance. For readability, the EFDM results are shown with the decimal places rounded. At 2, 4, and 6 m, the differences between the heat detector and the EFDM were 99, 194, and 264 s, respectively; that is, depending on distance, the EFDM detects the fire 99 to 264 seconds faster than the heat detector. Comparing the smoke detector with the EFDM, differences of 7, 45 and 306 s appear at 2, 4 and 6 m, respectively; in other words, the EFDM was shown to detect fires roughly 8 to 307 s faster than the smoke detector, depending on distance.

C. Usable Range Results (Case 3)

3fba6900d6bbb7e44c551d271ff905ff.png

The figure above shows the fire detection results within the usable range. The average time for the EFDM to detect the fire was 0.89 seconds. Because the angle between the combustibles and the webcam is similar to that of the trained fire image dataset, the fire is detected quickly and without the delay seen in the other experiments. It can therefore be concluded that fires can be detected within 15 m of the camera, the recommended viewing range of the webcam used in the experiment. Furthermore, fire detection was not significantly affected by the size or distance of the fire as long as it remained within the visible range. These results suggest that early fires can be detected when they occur in spaces where CCTV is installed with appropriate visibility.

6. Conclusion

This research developed an EFDM that uses CCTV for rapid detection of indoor fires, built on a dataset of about 10,000 flame and smoke images together with about 500 false-alarm images. The developed indoor EFDM is based on the YOLO model, and its indoor fire performance was verified. First, the recall, precision, and mAP0.5 values were derived from the test image dataset, and the detection time was derived from the indoor fire video test set. Second, experiments complying with UL 268B conditions confirmed the detectability of the video image fire detector in Case 1. Third, fire tests in accordance with ISO 7240-9 conditions confirmed the necessity of the EFDM as an indoor fire detector in Case 2, where the early fire detection times were measured and compared with those of typical fire detectors. Fourth, the fire detection time was measured within the maximum visible range of the camera to determine the maximum fire detection distance of the camera used as CCTV in Case 3.

Attention

Welcome to the WeChat public account "When Traffic Meets Machine Learning"! If you are in the field of rail transit, road traffic, and urban planning like me, you can also add WeChat: Dr_JinleiZhang, note "Join the group", join the traffic big data exchange group! Hope we make progress together!

8fd9882a332026bb09f10bb35112c80f.png

Origin blog.csdn.net/zuiyishihefang/article/details/129106876