
Firesense Demos

 

Demo categories

 

FIRESENSE Animation

    FIRESENSE aims to develop an automatic early warning system to remotely monitor areas of archaeological and cultural interest and protect them from the risk of fire and extreme weather conditions. The FIRESENSE system takes advantage of recent advances in multi-sensor surveillance technologies, combining a wireless sensor network capable of monitoring different modalities (e.g. temperature, humidity) with optical and infrared cameras, as well as local weather stations at the deployment sites. The signals collected from these sensors will be transmitted to a monitoring center, which will employ intelligent computer vision and pattern recognition algorithms as well as data fusion techniques to automatically analyze the sensor information. It will be capable of generating automatic warning signals for local authorities whenever a dangerous situation arises.

     

     

    Video-Based Flame Detection

    • Flame Detection (BILKENT algorithm)

    BILKENT developed a covariance-matrix-based fire detection method for video sequences. The algorithm divides the video into spatio-temporal blocks and detects fire using covariance-based features extracted from these blocks. Colour, spatial and temporal features are extracted, so the feature vectors capture both the spatial and the temporal characteristics of flame-coloured regions. A support vector machine (SVM) classifier is trained and tested on the extracted features.

    Unlike other algorithms used for similar tasks, the proposed method does not rely on background subtraction, which requires a stationary camera to detect moving flame regions, and can therefore be used with moving cameras. This is an important advantage of the proposed technique, because cameras may sway in the wind or a CCTV camera may slowly pan an area of interest in search of fire and flames. The system runs in real time (20 fps) on video frames with a resolution of 320x200 pixels.
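    The region covariance idea can be sketched in a few lines. This is a minimal illustration, not BILKENT's implementation: the per-pixel feature choice (intensity plus spatial and temporal gradient magnitudes) and the block size are assumptions made for the example; the descriptor is the upper triangle of the covariance matrix of the pixel features, which would then be fed to an SVM.

```python
import numpy as np

def covariance_descriptor(block):
    """Region covariance descriptor for one spatio-temporal block.

    block: array of shape (T, H, W) of intensities. Each pixel contributes
    an illustrative feature vector [intensity, |dI/dx|, |dI/dy|, |dI/dt|];
    the descriptor is the upper triangle of the 4x4 feature covariance.
    """
    I = block.astype(float)
    dx = np.abs(np.gradient(I, axis=2))   # spatial gradient, x
    dy = np.abs(np.gradient(I, axis=1))   # spatial gradient, y
    dt = np.abs(np.gradient(I, axis=0))   # temporal gradient
    feats = np.stack([I, dx, dy, dt], axis=-1).reshape(-1, 4)
    cov = np.cov(feats, rowvar=False)     # 4x4 covariance of pixel features
    iu = np.triu_indices(4)
    return cov[iu]                        # 10-dim vector for the classifier

# Example: a randomly flickering block vs. a perfectly static one
rng = np.random.default_rng(0)
d1 = covariance_descriptor(rng.random((8, 16, 16)))
d2 = covariance_descriptor(np.ones((8, 16, 16)))
```

    A static block yields an all-zero descriptor, while a flickering block has positive variance terms, which is exactly the separation the SVM exploits.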

     

     

    The following demo presents flame detection under night conditions. The fire took place in Mugla, Turkey (August 2011).

     

     

    • Flame Detection (CERTH algorithm)

    CERTH-ITI developed a video-based flame detection algorithm, which initially applies background subtraction and colour analysis to identify candidate flame regions in the image, and subsequently distinguishes between fire and non-fire objects based on a set of five extracted features. The features are derived after blob analysis and include the colour probability, spatial wavelet energy, temporal energy, spatiotemporal variance and contour (flickering) of candidate blob regions. Classification is based either on SVM classifiers trained with fire and non-fire video frames or on a rule-based approach. For the background subtraction step, CERTH-ITI implemented and tested 13 different background extraction algorithms from the literature, including adaptive median, Gaussian mixture models and eigenbackground.
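    The first stage (background subtraction intersected with colour analysis) can be sketched as follows. The RGB rule used here (R above a threshold and R >= G >= B) is a common choice from the fire-detection literature, not necessarily CERTH-ITI's exact rule, and the thresholds are illustrative.

```python
import numpy as np

def candidate_flame_mask(frame, background, r_thresh=180, diff_thresh=25):
    """Candidate flame pixels: moving AND flame-coloured.

    frame, background: uint8 arrays of shape (H, W, 3), RGB order.
    Motion: max channel difference from the background model exceeds
    diff_thresh. Colour: R > r_thresh and R >= G >= B (an illustrative
    rule from the literature).
    """
    f = frame.astype(int)
    b = background.astype(int)
    moving = np.abs(f - b).max(axis=2) > diff_thresh
    r, g, bl = f[..., 0], f[..., 1], f[..., 2]
    fire_colour = (r > r_thresh) & (r >= g) & (g >= bl)
    return moving & fire_colour

# Example: one bright orange pixel appears against a dark static background
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fr = bg.copy()
fr[1, 1] = (255, 160, 40)   # flame-like colour
mask = candidate_flame_mask(fr, bg)
```

    Blob analysis and the five features described above would then run only on the surviving regions, which keeps the per-frame cost low.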

     


    • Flame Detection (SUPCOM algorithm)

    SUPCOM developed a real-time flame detection system for video sequences captured by fixed and moving (PTZ) cameras. The first step of the proposed algorithm is moving object detection: this step is part of flame characterization, because movement is an important criterion of flame behaviour. The moving objects in the video are therefore identified by an off-the-shelf algorithm; among several candidates from the literature, the “adaptive background with persistent pixels” (ABPP) algorithm was selected. The second step extracts a set of flame characteristics, including colour, temporal intensity variance, spatial intensity variance, shape variation and shape complexity, and classifies them as flame or non-flame using a set of fuzzy Context Independent Variable Behavior (CIVB) classifiers.
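    The general shape of such a background model can be sketched with a running average that only blends in "persistent" pixels. This is a generic sketch, not the ABPP algorithm itself (whose persistence logic is not detailed in the text); the adaptation rate and threshold are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.95, motion_thresh=20):
    """One step of a running-average background model (generic sketch;
    the ABPP algorithm named in the text adds persistence logic on top).

    Pixels close to the background are treated as persistent and blended
    into it; pixels far from it are flagged as moving and the background
    there is left unchanged. Returns (new_background, moving_mask).
    """
    diff = np.abs(frame - bg)
    moving = diff > motion_thresh
    new_bg = np.where(moving, bg, alpha * bg + (1 - alpha) * frame)
    return new_bg, moving

bg = np.full((3, 3), 100.0)
frame = bg.copy()
frame[0, 0] = 200.0          # a moving object appears at this pixel
new_bg, moving = update_background(bg, frame)
```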

     

     

    Video-Based Smoke Detection

    • Smoke Detection

    BILKENT’s close-range smoke detection algorithm starts with a smoke-colour region detection step to decrease the search area in a frame. The candidate regions are then divided into spatio-temporal blocks, obtained by splitting the smoke-coloured regions into 3D regions that overlap in time, and covariance-based feature extraction is applied to these blocks. The method combines colour, spatial and temporal domain information in a feature vector for each spatio-temporal block using region covariance descriptors. Classification of the features is performed only at the temporal boundaries of blocks instead of at every frame, which reduces the computational complexity of the method.
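    The temporally overlapping 3D partitioning can be sketched as below; the block dimensions and overlap are illustrative choices, not values from the project.

```python
import numpy as np

def spatiotemporal_blocks(video, bs=16, bt=8, t_overlap=4):
    """Split a video into 3D blocks of size (bt, bs, bs) that overlap in
    time by t_overlap frames (sizes here are illustrative).

    video: array of shape (T, H, W). Yields ((t, y, x), block) pairs;
    classification then runs once per block, at its temporal boundary,
    rather than once per frame.
    """
    T, H, W = video.shape
    step_t = bt - t_overlap
    for t in range(0, T - bt + 1, step_t):
        for y in range(0, H - bs + 1, bs):
            for x in range(0, W - bs + 1, bs):
                yield (t, y, x), video[t:t + bt, y:y + bs, x:x + bs]

# A 16-frame 32x32 clip yields 3 temporal positions x 4 spatial tiles
video = np.zeros((16, 32, 32))
blocks = list(spatiotemporal_blocks(video))
```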

    BILKENT also developed a long-range smoke detection algorithm for wildfire detection. Smoke at far distances exhibits different spatio-temporal characteristics than nearby smoke and fire, so different methods are needed for smoke detection at long range. The algorithm consists of three main sub-algorithms: (i) slow-moving object detection in video, (ii) smoke-coloured region detection and (iii) correlation-based classification. The method is similar to close-range smoke detection except for the detection of moving objects. Real fire tests were conducted in Rhodiapolis, as shown in the following video.
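    One common way to detect slow-moving regions, used here as an illustrative assumption rather than as BILKENT's exact method, is to maintain two running-average backgrounds with different adaptation rates: a region absorbed by the fast model but not yet by the slow one has changed slowly over time, which is the signature of distant smoke.

```python
import numpy as np

class SlowMotionDetector:
    """Detect slowly moving regions with two running-average backgrounds,
    one adapting fast and one slow (illustrative parameters)."""

    def __init__(self, first_frame, a_fast=0.7, a_slow=0.98, thresh=10):
        self.fast = first_frame.astype(float)
        self.slow = first_frame.astype(float)
        self.a_fast, self.a_slow, self.thresh = a_fast, a_slow, thresh

    def step(self, frame):
        f = frame.astype(float)
        self.fast = self.a_fast * self.fast + (1 - self.a_fast) * f
        self.slow = self.a_slow * self.slow + (1 - self.a_slow) * f
        # slow-moving pixels are where the two models disagree
        return np.abs(self.fast - self.slow) > self.thresh

# A bright region appears and persists: the fast model absorbs it
# quickly, the slow model lags behind, so the pixel gets flagged.
det = SlowMotionDetector(np.zeros((4, 4)))
mask = None
for _ in range(20):
    frame = np.zeros((4, 4))
    frame[2, 2] = 120.0
    mask = det.step(frame)
```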

     

     

    Software Platforms

    • Standalone Software Platform

    The fire and smoke detection algorithms developed within WP3 are integrated in a standalone software platform, which is used for algorithm testing and also serves as a demonstrator for dissemination purposes. The software was developed using platforms and libraries such as Qt, OpenCV, OpenGL, Intel’s IPP and ffmpeg, most of them open source. It runs on the Windows operating system and has a graphical user interface which allows the application of flame and smoke detection algorithms to stored image sequences and videos, the display of intermediate image processing results and the selection/modification of algorithm parameters.

     

     

    Fire Detection using IR Technology

    • Multi-Spectral Image Sensing Platform

    A new multi-spectral image sensing platform was designed and developed within the FIRESENSE project by Xenics. The platform allows the recording of visible, short-wave infrared (SWIR) and long-wave infrared (LWIR) video. It has already been used in field tests at the pilot sites.

     

     

    • Fire Detection with a PIR sensor

    BILKENT developed a PIR sensor system which can detect fire within its field of view; it can be used for fire detection in large rooms. The flame flicker of an uncontrolled fire and the ordinary activity of human beings and other objects are modelled using a set of Hidden Markov Models (HMMs), which are trained on the wavelet transform of the PIR sensor signal. Whenever there is activity within the viewing range of the PIR sensor system, the sensor signal is analyzed in the wavelet domain and the wavelet signals are fed to the set of HMMs. A fire/no-fire decision is made according to which HMM produces the highest probability.
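    The decision rule (pick the HMM with the highest likelihood) can be sketched with the standard scaled forward algorithm over toy two-state, two-symbol models. The models below are illustrative stand-ins, not the trained FIRESENSE HMMs: the "fire" model favours alternating symbols (flicker), the "no-fire" model favours persistent ones, and the observations stand in for quantized wavelet coefficients.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: initial state probabilities,
    A[i, j]: transition i->j, B[i, k]: probability state i emits symbol k."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def classify(obs, models):
    """Return the label of the HMM giving the highest likelihood."""
    return max(models, key=lambda name: log_likelihood(obs, *models[name]))

pi = np.array([0.5, 0.5])
B = np.array([[0.9, 0.1], [0.1, 0.9]])          # states emit "their" symbol
A_fire = np.array([[0.1, 0.9], [0.9, 0.1]])     # flicker: states alternate
A_calm = np.array([[0.9, 0.1], [0.1, 0.9]])     # ordinary motion: states persist
models = {"fire": (pi, A_fire, B), "no_fire": (pi, A_calm, B)}

flickering = [0, 1] * 8   # stand-in for a flickering wavelet signal
steady = [0] * 16         # stand-in for ordinary slow activity
```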

     

     

    Wireless Sensor Network

    • OPNET Simulation

    The performance of routing and MAC protocols and the reporting capability of a WSN in a fire environment are evaluated using OPNET Modeler simulations based on node models designed by Bogazici University. Fire propagation is simulated by a grid-structured fire propagation module. The realistic fire propagation model used in the simulations is imported from the EFP software developed by CERTH-ITI and BILKENT; for this purpose, the OPNET fire propagation module was modified to use the ignition time map output provided by the EFP software.

    Before the start of a WSN simulation, the fire propagation module in OPNET reads the ignition time map and schedules the ignition times of the cells in the grid. During a simulation, the fire propagation module labels the ignited cells as ON_FIRE at the scheduled times. The WSN simulator uses the grid information to generate data and to destroy nodes, which affects routing decisions and accessibility to the sink(s). The nodes used in the simulations are video-capable wireless sensor nodes; each node is assumed to know its position and the grid cells in the field of view of its camera, which spans 52°. In each sensing period, a sensor checks whether any of these cells is labelled ON_FIRE. If the cell in which the sensor resides is labelled ON_FIRE, the sensor is destroyed; otherwise, if any other cell is labelled ON_FIRE, video frames are created and delivered to the sink. Traffic is generated periodically for 10 seconds after every 60 seconds. In the simulations, we evaluate the following performance metrics: i) frame delivery ratio, ii) frame latency and iii) energy consumption per successful frame delivery.
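    The field-of-view test each sensor performs can be sketched geometrically. Only the 52° FOV comes from the text; the camera heading, range limit and cell layout below are illustrative assumptions.

```python
import math

def cells_in_view(node_xy, heading_deg, cells, fov_deg=52.0, max_range=100.0):
    """Return the ids of grid cells inside a camera's field of view.

    node_xy: camera position; heading_deg: viewing direction;
    cells: dict mapping cell id -> (x, y) centre. The 52-degree FOV is
    from the text; heading and max_range are illustrative.
    """
    half = fov_deg / 2.0
    visible = []
    for cid, (x, y) in cells.items():
        dx, dy = x - node_xy[0], y - node_xy[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue
        ang = math.degrees(math.atan2(dy, dx))
        diff = (ang - heading_deg + 180) % 360 - 180   # wrap to [-180, 180)
        if abs(diff) <= half:
            visible.append(cid)
    return visible

# Camera at the origin looking along +x: only the cell ahead is in view
cells = {"A": (10, 0), "B": (0, 10), "C": (-10, 0)}
vis = cells_in_view((0, 0), heading_deg=0.0, cells=cells)
```

    A sensor would run this check once per sensing period against the set of cells currently labelled ON_FIRE.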

    In the following demo, we present the MATLAB animations for the topology change and propagation of fire in the simulations of a WSN deployed in Rhodiapolis, Kumluca district.


     

    Control-Center User Interfaces

    • Estimation of Fire Propagation

    The Estimation of Fire Propagation (EFP) software was developed by CERTH-ITI and BILKENT. Fire spread calculations are mainly based on the fireLib software library and the Fire Behavior SDK, which implement the popular BEHAVE algorithm. According to BEHAVE, fire propagation depends on a number of parameters (e.g. ignition points, fuel model, humidity, wind, terrain data and other factors). These parameters are either provided by the user or automatically estimated from available data. Specifically,

    - Wind information can be obtained in real time from existing weather stations or from Internet weather portals.

    - Digital Terrain Models (DTMs) can be derived from SRTM data (90 m resolution), which are freely available for the whole world.

    - Moisture information can be manually provided or modelled based on data obtained by local weather stations or humidity sensors (WSN).

    - High resolution vegetation maps have been generated within FIRESENSE based on information from satellite images, ground truth and using the existing CORINE Land Cover classes.
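    The kind of ignition-time map such a calculation produces can be illustrated with a toy grid propagation. This is a deliberately simplified stand-in for the BEHAVE-based computation: each cell gets a single scalar spread rate, whereas the real rates depend on the fuel, wind, terrain and moisture inputs listed above.

```python
import heapq

def ignition_time_map(rate, ignition_points):
    """Ignition-time map on a grid via Dijkstra-style propagation.

    Toy stand-in for the BEHAVE-based calculation: each cell has a
    traversal time 1/rate[y][x], and fire reaches a cell along the
    fastest 4-connected path from any ignition point. Cells with
    rate 0 (e.g. firebreaks) never ignite.
    """
    H, W = len(rate), len(rate[0])
    INF = float("inf")
    t = [[INF] * W for _ in range(H)]
    heap = []
    for (y, x) in ignition_points:
        t[y][x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        ct, y, x = heapq.heappop(heap)
        if ct > t[y][x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and rate[ny][nx] > 0:
                nt = ct + 1.0 / rate[ny][nx]
                if nt < t[ny][nx]:
                    t[ny][nx] = nt
                    heapq.heappush(heap, (nt, ny, nx))
    return t

# Uniform fuel (rate 1) with a firebreak (rate 0) in the middle column:
# the fire must flow around it through the centre row
rate = [[1, 0, 1],
        [1, 1, 1],
        [1, 0, 1]]
tmap = ignition_time_map(rate, [(0, 0)])
```

    A map like this is exactly what the OPNET fire propagation module consumes to schedule ON_FIRE events in the WSN simulations described earlier.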

    The EFP is an interactive software module that has been designed and developed to visualize the results of fire propagation estimation in a user friendly 3D-GIS environment, based on Google Earth API.


     

    Some of the functionalities of the EFP software are shown in the following video.

     

     

    • Control Center Software

    The CC/GUI comprises three graphical interfaces, designed for clear and comfortable viewing: i) a main screen, ii) a video screen and iii) a maintenance screen. All of these interfaces were developed by SUPCOM.
    As shown in the following demo video, the main screen of the GUI shows the map of the supervised area and the location of all installed sensors/cameras as icons. The sensors tab lists all types of sensors, each with its installed hosts superimposed on the map; clicking on a sensor icon brings up information about its status, incoming data flow and its parameters, which are reconfigurable. In the same way, clicking on a camera icon opens a small window displaying in real time the video acquired by that camera.

     

     

    The video screen is dedicated to the cameras. This interface is composed of two display sub-windows. The larger top window displays the video stream of a camera selected by the user (e.g. in case of an alert), while the second sub-window displays the video streams from the remaining cameras. Furthermore, the user can control the cameras by zooming, tilting or panning (for PTZ cameras).

     

     
