=Paper=
{{Paper
|id=Vol-2146/paper95
|storemode=property
|title=Computer vision system for fire detection and report using UAVs
|pdfUrl=https://ceur-ws.org/Vol-2146/paper95.pdf
|volume=Vol-2146
|authors=Pablo Chamoso, Alfonso González Briones,Fernando De La Prieta,Juan M. Corchado
|dblpUrl=https://dblp.org/rec/conf/rsff/ChamosoGPC18
}}
==Computer vision system for fire detection and report using UAVs==
Pablo Chamoso, Alfonso González-Briones, Fernando De La Prieta and Juan M. Corchado
BISITE Digital Innovation Hub, University of Salamanca.
Calle Espejo 2, 37007. Salamanca, Spain.
Abstract
Continuous technological progress has led to great changes in our society. UAVs (Unmanned Aerial Vehicles), commonly known as drones, are one of the most significant technological advancements of the last decade. UAVs offer a wide variety of new possibilities and have become a tool that is used on an everyday basis. UAVs can be used in fire control due to their ability to manoeuvre rapidly and their wide range of operation. This article presents a review of the main uses of UAVs in combatting fire. Special emphasis is placed on fire detection techniques using computer vision and infrared imaging, as well as the hardware systems that drones must incorporate to perform this task. This article also presents a simple proposal for fire detection and alerting using UAVs and computer vision.
1 Introduction to fire detection using UAVs
The summer of 2017 was a very bad period for forests in Spain: almost 105,000 hectares of forest and shrubland were destroyed by fire. In autumn, drought and high temperatures triggered dramatic fires in the north of Spain. Sadly, hectares of burnt forest and material damage are not our only loss; many people have been wounded and some died [1]. This problem is not unique to Spain: climate change has contributed to increased drought and heat waves, fuelling these "superfires" all over the world. Up to the 26th of August, the European Forest Fire Information System (EFFIS) counted 547,812 hectares affected by fires within the European Union, 60% more than the average between 2008 and 2016. The flames have devoured Mediterranean Europe. In the Americas, British Columbia has experienced the worst fire season since records have been kept, and between January and February 2017 more than half a million hectares burned in Chile. In October, the horrific fires in California burned an area larger than the entire city of New York [1]. Unfortunately, millions of hectares of forest are burned each year due to fires, and large amounts of money are needed to put them out [3]. For this reason, it is necessary to detect fire early enough to prevent it from spreading to other areas of the forest [5]. Traditional methods of forest fire monitoring and detection employ either mechanical devices or human observers to monitor the environment, but these methods can be dangerous and costly in terms of the required human resources. To this end, work has been done on various methods and techniques to quickly monitor, detect and extinguish forest fires before they become too large and uncontrollable. The first methods were primitive and traditional, such as fire watch towers (human observation may be limited by different factors), wireless sensor networks (difficult or impossible to cover large areas) or satellite and aerial monitoring (the resolution of satellite imagery is low) [4]. However, these early methods indicated the direction taken by subsequent proposals focused on fire detection. For this reason, remote fire detection by means of electronic devices and from high altitudes
has become the main way to detect and monitor fires. UAVs are a low-cost option for monitoring, detecting and even fighting forest fires.
UAVs are not new; they have existed for decades, but only recently have they become popular. With the development of technology, modern UAVs have become more advanced and the possibilities for using them have grown, which poses new legal and regulatory questions [9]. Initially, they were designed for military purposes: to transport balloon bombs and as a form of training for anti-aircraft weapons during World War II. Today, their use is becoming more frequent and they can carry out a wider range of tasks in both the military and professional sectors. These unmanned aerial vehicles allow the integration of remote sensing techniques that can also meet the requirements of spatial, spectral and temporal resolution [7], [8], [9], serving as a tool for the management of the collected data. Unmanned aerial vehicles allow for the execution of long-term, monotonous and repeated tasks that go beyond human capabilities. This has led to increased global attention to the applications of UAVs to forest fires in recent years [10], [11], [12], [13], [14], [15].
UAVs have been widely used in forestry, agriculture and livestock farming. For example, they have been used to scan large livestock areas [16]: the counting and monitoring of animal species can be performed on video recordings taken by UAVs, and the system keeps track of the number of detected animals by analyzing the images taken with the UAV's cameras. Another work presents a system capable of detecting ground vehicles in real time through aerial images taken by a UAV; in addition, the system offers the possibility of guiding the UAV autonomously to keep track of a vehicle that has been detected previously [17]. Other investigations make use of the functionalities provided by existing MultiAgent Systems (MAS) to coordinate tasks among UAVs; one such article presents a case study that uses these capabilities to detect oil spills [18].
In [19], an octocopter is presented that captures photographs and images in multiple formats and carries sensors and scientific-technical measuring equipment. It is a collapsible, lightweight, multi-rotor, vertical take-off UAV (Unmanned Aerial Vehicle), made of highly resistant aerospace materials. It includes a communication center and a base station, all of which are transportable, lightweight and compact.
The objective of this paper is to review the state of the art on the use of UAVs for fire detection in large forested areas. Specifically, airborne fire detection techniques are reviewed.
The rest of the article is structured as follows: Section 2 describes diverse state-of-the-art proposals in the area of UAVs, fire detection and computer vision technologies. Section 3 provides a full description of the system proposed in this work, including the functionality of each of its components. Finally, Section 4 outlines the conclusions drawn from this research and future lines of work.
2 State of the art of related techniques
This section studies the main techniques to be incorporated into a UAV for detecting fires and communicating alerts to the people responsible for the forest area, allowing them to take the corresponding actions.
2.1 Computer Vision in UAVs
One of the main problems in the application of computer vision techniques is that most of them employ classifiers that must be trained, such as Eigenfaces, Fisherfaces or LBP [20]. These classifiers need a large number of images of forest fires for correct classification. Often, researchers need to download images from Internet search engines or rely on existing collections of fire images [21]. This makes it very difficult to test and improve proposed algorithms. Another possibility is to use infrared images, which are easier to process than visible images because the intensity of the fire pixels is much greater than that of the other pixels [23]. Detecting a fire zone in an infrared image consists of finding the threshold that differentiates the pixels belonging to the fire from those of the background. There are several thresholding algorithms that can be applied to the detection of fire pixels [22], [24], [25], [26]. However, this technique also has a number of limitations. One of them is that areas near the fire, such as hot gases, can produce a difference between the fire areas that appear in the visible domain and those in the infrared domain. A paper illustrating this deficiency shows that the near-infrared domain produces forest fire areas that are very similar to those obtained in the visible domain [27]. Considering that it is easier to detect fire pixels in infrared images but that visible images remain the reference, new algorithms for fire pixel detection using image fusion could be developed [28].
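As a minimal illustration of this thresholding approach, the following Python sketch (assuming OpenCV, which the system described later in this paper also uses, and a hypothetical single-channel infrared frame as input) applies Otsu's method [24] to separate candidate fire pixels from the background:

import cv2

# Load an infrared frame as a single-channel image (the file name is hypothetical).
ir_frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method automatically selects the threshold that best separates
# the bright fire pixels from the cooler background.
threshold, fire_mask = cv2.threshold(ir_frame, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Every non-zero pixel in fire_mask is a candidate fire pixel.
print("Selected threshold:", threshold)
print("Candidate fire pixels:", cv2.countNonZero(fire_mask))

The same loop could swap in the iterative [25] or minimum-error [26] thresholding methods; Otsu's method is shown here only because OpenCV provides it directly.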
2.2 Notification systems in UAVs
Once a UAV has detected fire in a recorded video, an alert is sent to the notification platform. One of the most widely used technologies for this is XBee. XBee modules are small electronic chips capable of communicating wirelessly with each other; they are integrated solutions that provide a wireless medium for interconnection and communication between devices. These modules use the IEEE 802.15.4 network protocol to create point-to-multipoint or peer-to-peer networks. They were designed for applications that require high data traffic, low latency and predictable communication timing. XBee is a brand owned by Digi International and is based on the ZigBee protocol. Another option is the use of WiFi. When using WiFi, it is necessary to equip the forest area with access points so that there is a local network enabling control over the area. Knowledge of black spots (coverage gaps) in the whole area is also important so that they can be avoided in communications.
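As a sketch of how such a notification could travel over the local Wi-Fi network (the message format, address and port below are illustrative assumptions, not taken from the deployed system), the UAV side could send the detection coordinates to the base station as a small JSON datagram:

import json
import socket

# Address of the base station on the local Wi-Fi network (hypothetical).
BASE_STATION = ("192.168.1.10", 5005)

def send_fire_alert(latitude, longitude):
    # Encode the alert as JSON and send it as a single UDP datagram.
    message = json.dumps({"event": "fire_detected",
                          "latitude": latitude,
                          "longitude": longitude}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, BASE_STATION)

send_fire_alert(40.9641, -5.6282)

UDP is chosen here only because a lost alert can simply be resent on the next processed frame; a TCP connection would be an equally reasonable design.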
2.3 Shortcomings in the detection of fires using UAVs
For fire identification, it is necessary to combine classifiers and infrared images in order to minimize the deficiencies of each of these two techniques; that is to say, to use classifiers trained with fire images together with infrared images. For communicating the UAV's alerts, the use of WiFi is preferable: communications via XBee cannot exceed about 100 m, which greatly limits their use. Therefore, the main role of the proposed base station control software is to offer autonomous control through WiFi communication. This is done by applying an autonomous flight algorithm designed for this purpose, which follows a series of points entered in the software with the help of the UAV status received via telemetry. In addition, the software allows the user to view the configured flights, as well as receive notifications with the coordinates at which a fire has been detected.
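As an illustration of the waypoint-following idea (a sketch of the general technique, not the authors' actual flight algorithm), each control step can compare the UAV position received via telemetry with the next configured route point and compute the distance and heading to steer toward:

import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    # Great-circle distance (metres) and initial bearing (degrees) between two points.
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the distance.
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    distance = 2 * r * math.asin(math.sqrt(a))
    # Initial bearing measured clockwise from north.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

# Telemetry position -> next route point (example coordinates).
d, b = distance_and_bearing(40.9678, -5.6217, 40.9641, -5.6282)
print(f"{d:.0f} m to next waypoint, heading {b:.0f} degrees")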
3 Proposed system
The previous section studied the existing remote-control technologies in each of the three parts into which the proposal can be divided: aircraft, communication and control. The initial objective was to offer a complete system that would improve these existing technologies in the field of fire detection.
As far as the aircraft is concerned, there are two distinct parts that influence flight: its frame or chassis and its electronics. The analysis of these parts leads to the conclusion that the improvements that can be made with the available resources are minimal, if not nil, for the purposes of this project. This is because there are large international companies with years of experience that invest millions of euros in the development of both chassis and electronic stabilization systems, as well as open source projects conducted by independent developers collaborating on do-it-yourself platforms, whose systems are much less stable than those of the above-mentioned international companies; competing against them would therefore make no sense.
On the other hand, the communication and control blocks are very similar in all the existing projects; they do not reflect the great advances made in the flight system. All of them use radio station systems for communication, which are very stable and offer a long communication range but do not allow data transmission in digital form, so communication is limited to flight orders sent using PPM (pulse-position modulation) transmission. This type of transmission requires the presence of a second communication module for telemetry transfer and a third communication system for the transmission of video in real time.
As for the control systems used to pilot the UAVs, they all use radio stations, which are very sophisticated and easy to integrate with PPM radio systems; some can even incorporate telemetry reception modules and show their data on a digital display. From our analysis, we propose to design a system capable of simulating the connection of radio systems in order to control any UAV stabilizer (if possible, the best on the market) and capable of transmitting telemetry digitally, together with flight orders and video, using only a Wi-Fi connection. The UAV is remotely controlled from the ground with a gamepad connected to a computer instead of radio stations, and the telemetry information and video transmitted from the multirotor are displayed on the computer screen. This Wi-Fi communication system is easily accessible to consumers, offers high-powered access points that allow for connections of up to 50 km, and its cost is much lower than that of long-range radio systems. Although it does not match the range of the aforementioned radio links, given the limited flight time of the UAV due to battery life (an element common to all systems), which today barely exceeds 30 minutes, this range is more than sufficient for the system to be developed.
In addition, the control that can currently be carried out with a computer is not comparable to that carried out with a radio station, so designing control software capable of commanding the UAV and displaying the telemetry it transmits would substantially improve the leveraged systems. Moreover, such software can be seamlessly integrated with the above-mentioned Wi-Fi communication system, receiving the data measured by the sensors and the video and, finally, transmitting the flight orders from the control software to the UAV.
Figure 1: Descriptive diagram of the complete system.
3.1 Hardware
The hardware required to control any of the stabilizers on the market must be capable of transmitting the information to the stabilizer through a series of outputs (at least 8), as if it were a radio receiver.
Radio receivers use manufacturer-specific settings to order the data to be transmitted, but maintain a standard for the connection, using three-pin connectors for each of the channels, through which the data associated with each channel is sent. The only exception is the manufacturer Futaba, which uses a proprietary protocol (S-BUS) with which a single three-pin connector is able to send the information of all radio channels. However, all controllers are compatible with the above-mentioned standard, while only a few are compatible with Futaba's S-BUS protocol.
We chose a Raspberry Pi 3 single-board computer as the hardware for performing this task: it is capable of running a Linux operating system, it allows the APM 2.5 to be connected through one of its USB ports, and a Wi-Fi adapter can be connected to another port. All these features made us opt for this hardware. Therefore, the controller consists of a Raspberry Pi 3 and an APM 2.5 connected via USB.
This hardware has to interact with, at least, the stabilizer of the multirotor. There are many multirotor stabilizers on the market; among them, the DJI Wookong-M stabilizer has been chosen because it provides very precise stabilization and control.
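The paper does not specify the protocol spoken between the Raspberry Pi and the APM 2.5; a plausible sketch, assuming the APM runs the standard ArduPilot firmware and therefore exposes MAVLink telemetry over its USB serial port (the device path is also an assumption), would use pymavlink:

from pymavlink import mavutil

# The APM usually enumerates as a USB serial device on Linux;
# the exact device path is an assumption.
connection = mavutil.mavlink_connection("/dev/ttyACM0", baud=115200)

# Block until the autopilot announces itself with a heartbeat.
connection.wait_heartbeat()
print("Heartbeat from system", connection.target_system)

# Read one position report from the telemetry stream.
msg = connection.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
print("lat:", msg.lat / 1e7, "lon:", msg.lon / 1e7)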
3.1.1 Hardware Connections
The LiPo battery connects directly to the ammeter of the APM 2.5, which performs consumption measurements. The ammeter's data cable is connected to the APM 2.5 itself (the result of the measurement is sent there), and the continuous power supply is connected to the PMU of the DJI Wookong-M, to the multirotor drives and to the UBEC voltage regulator. The PMU connects to the other components of the DJI Wookong-M in the manner specified by the manufacturer, DJI Innovations, as shown in Figure 2.
Figure 2: Connection between DJI Wookong-M components.
The drives connect their data cables to the DJI Wookong-M's MC inputs in the order specified by DJI Innovations, and their banana connectors to the corresponding motor connectors regardless of the order (a test without propellers is required to check the direction of rotation before the first flight). The UBEC is connected to the Raspberry Pi GPIO pins (pin 2 for power and pin 6 for ground); it also powers the flight camera and the stabilizer of the high-resolution camera. The Raspberry Pi connects through one of its USB 2.0 ports to the USB Wi-Fi antenna to establish communication with the access point, and through the other USB port to the APM 2.5's microUSB to exchange data (flight orders in one direction and sensor information in the other). The flight IP camera is connected to the Ethernet port of the Raspberry Pi. The APM 2.5 connects each of its first 6 outputs (marked as outputs) to the DJI Wookong-M's MC with 3-pin connectors, so that the corresponding signal is sent to each of the DJI's channels through each of the outputs. If a camera stabilizer is available, it is connected to outputs 7 (pitch movement) and 8 (roll movement) of the APM 2.5, as shown in Figure 3.
Figure 3: Frontal and lateral views of the gimbal, showing roll and pitch movement.
With the connections and the type of each link between components shown above, the wiring diagram of the electronic components that control and monitor the multirotor is as shown in Figure 4.
Figure 4: Wiring diagram between electronic components.
3.2 Software
The main role of the base station's control software is to offer the functionality necessary to perform remotely controlled flights, either manually, thanks to the use of the previously mentioned gamepad, or autonomously, through the application of an autonomous flight algorithm designed for this project. This algorithm follows a series of points introduced in the software with the help of the UAV status received through telemetry. The control software offers a configurator that allows the user to set the parameters needed to perform a new flight, or to schedule a future flight without having to connect the UAV. In addition, it is possible to export the configuration to files so that they can be used as flight profiles, selectable according to the type of flight to be performed or the specific configuration of the multirotor, as well as other parameters such as the software language.
The route programming mode is designed so that whoever programs the routes that will later be travelled by the UAV can load these routes onto the UAV at the time of flight and repeat the same routes without having to plan them again, thus saving time. Programmed routes are exported to files with the extension .route and can be loaded from the main configuration, or even with the software started in flight mode, from where a route can also be configured without having been previously programmed in route programming mode.
Another feature of the base station control software is that it is executed in the places where the flights are performed, which in most cases are areas with no Internet connection (or at least not at an acceptable speed). The software therefore cannot depend on a connection during the flight to show information about the flight area, so the map of the area must be obtained and geo-positioned beforehand. The image can be obtained from anywhere as long as it is perpendicular to the terrain, which allows photographs taken by the UAV to be used, but the coordinates of at least three of the four corners must be known in order to assign a coordinate (longitude and latitude) to each of the pixels in the image. This is called image geolocation. It can be done through the control software when an image is included for the first time without associated geolocation data, or by using the software developed to facilitate the acquisition of terrain images from Google Maps, accessible online and free of charge, currently published at http://servidor-online.com/hawk-geoposition/
Figure 5: Main configuration of the control software.
(a) View of the route to be flown. (b) View of the collected video.
Figure 6: Flight visualization platform.
The system used for geopositioning stores the corner coordinates in the metadata of the image itself, transparently to the user: no external files are used; the information is embedded in the image as metadata. The metadata is formed with the coordinates of each corner, in the following format:
GEO-Information,LatitudeA:LongitudeA,LatitudeB:LongitudeB,LatitudeC:LongitudeC,LatitudeD:LongitudeD,0.0000000000
Example:
GEO-Information,40.96775528287228385:-5.621737696952938,40.96775528287228385:-5.628174998588699,40.96410968463514:-5.621737696952938,40.96410968463514:-5.628174998588699,0.0000000000
Another of the main features of the software is the control of the UAV in two ways: manual (with a pilot on the ground using the gamepad) and autonomous or automatic (without a pilot, with the movement calculated automatically from a series of route points entered beforehand).
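Given the four corner coordinates recovered from the metadata, each pixel of the map image can be geo-referenced by interpolation. The following sketch assumes corners A, B, C and D are the top-left, top-right, bottom-left and bottom-right of the image, respectively (the paper does not state the corner order, so this mapping is an assumption):

def pixel_to_geo(x, y, width, height, corners):
    # Bilinear interpolation of (latitude, longitude) for pixel (x, y).
    # corners = (A, B, C, D) as (lat, lon) tuples, assumed to be
    # top-left, top-right, bottom-left and bottom-right.
    a, b, c, d = corners
    u = x / (width - 1)   # 0 at the left edge, 1 at the right edge
    v = y / (height - 1)  # 0 at the top edge, 1 at the bottom edge
    top = (a[0] + u * (b[0] - a[0]), a[1] + u * (b[1] - a[1]))
    bottom = (c[0] + u * (d[0] - c[0]), c[1] + u * (d[1] - c[1]))
    return (top[0] + v * (bottom[0] - top[0]),
            top[1] + v * (bottom[1] - top[1]))

corners = ((40.96775528287228385, -5.621737696952938),
           (40.96775528287228385, -5.628174998588699),
           (40.96410968463514, -5.621737696952938),
           (40.96410968463514, -5.628174998588699))
print(pixel_to_geo(960, 540, 1920, 1080, corners))  # centre of a 1920x1080 image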
3.3 Fire Detection Technology
An algorithm has been developed to detect fires using the Python scripting language, with the OpenCV and NumPy APIs. The video taken by the UAV camera is processed frame by frame. Each frame is converted to the HSV colour space, which is much closer to the way we humans perceive the environment than the RGB space. A mask with the upper and lower colour values is then defined and applied to the frame, so that only the colours in the defined range remain visible. Figure 7 shows how the algorithm deployed on the Raspberry Pi of the UAV works.
Figure 7: Image of the fire detection algorithm.
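A minimal sketch of the described processing loop follows, assuming OpenCV and NumPy; the HSV bounds are illustrative values for reddish-orange flame colours, and the input file name and alert threshold are assumptions rather than the authors' exact parameters:

import cv2
import numpy as np

# Illustrative HSV range for flame-like colours (not the authors' exact values).
LOWER_FIRE = np.array([0, 120, 150], dtype=np.uint8)
UPPER_FIRE = np.array([25, 255, 255], dtype=np.uint8)

capture = cv2.VideoCapture("uav_video.mp4")  # hypothetical input video
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Convert the frame from BGR to HSV, as described above.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep only the pixels whose colour falls inside the fire range.
    mask = cv2.inRange(hsv, LOWER_FIRE, UPPER_FIRE)
    if cv2.countNonZero(mask) > 500:  # illustrative alert threshold
        print("Possible fire:", cv2.countNonZero(mask), "matching pixels")
capture.release()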
4 Conclusions and Future Work
This paper has provided an overview of the main computer vision techniques for fire detection in forested areas. Unlike ground or space systems, the cost of deploying these techniques on UAVs is low and no humans are put at risk when performing this activity.
Since the density of trees in woodland areas impedes the detection of small fires through simple monitoring, the combination of sensorization and artificial vision techniques in a UAV is the best choice for fire detection in forests.
One of the drawbacks encountered in carrying out this work is the possibility that smoke may block the images of the fire, although before the smoke becomes dense the fire should already have been detected. Sunlight, for example, can cause false positives, so combining infrared images with the proposed algorithm on the collected video can contribute to robust detection of forest fires, with a high probability of detection, low false alarm rates and improved adaptive capabilities in various environmental conditions.
The use of infrared images, which are easier to analyze, often causes hot gases to be detected as fire as well. Since these areas are similar to those of fire, they can produce a difference between the fire areas that appear in the visible domain and those in the infrared domain.
Acknowledgments
This work was developed as part of "MOVIURBAN: Máquina social para la gestión sostenible de ciudades inteligentes: movilidad urbana, datos abiertos, sensores móviles". ID: SA070U 16. Project co-financed with Junta de Castilla y León, Consejería de Educación and ERDF funds.
References
[1] WWF España. Incendios Forestales. 2018. https://www.wwf.es/nuestro_trabajo/bosques/incendios_forestales/
[2] WWF España. Informe Incendios Forestales. 2018. https://www.wwf.es/nuestro_trabajo/bosques/incendios_forestales/in
[3] Martinez-de Dios, J. R., Arrue, B. C., Ollero, A., Merino, L., & Gómez-Rodríguez, F. Computer vision
techniques for forest fire perception. Image and Vision Computing, 26(4), 550-562, 2008
[4] Vipin, V. Image processing based forest fire detection. International Journal of Emerging Technology and
Advanced Engineering, 2(2), 87-95, 2012
[5] Lin, H., Liu, Z., Zhao, T., & Zhang, Y. Early warning system of forest fire detection based on video technology.
In Proceedings of the 9th International Symposium on Linear Drives for Industry Applications, Volume 3
(pp. 751-758). Springer Berlin Heidelberg. January, 2014
[6] Chisholm, R. A., Cui, J., Lum, S. K., & Chen, B. M. UAV LiDAR for below-canopy forest surveys. Journal
of Unmanned Vehicle Systems, 1(01), 61-68. 2013
[7] Everaerts, J. The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping. The International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 37(2008), 1187-1192 ,
2008
[8] Berni, J. A., Zarco-Tejada, P. J., Suárez, L., & Fereres, E. Thermal and narrowband multispectral remote
sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Transactions on Geoscience and
Remote Sensing, 47(3), 722-738, 2009
[9] Chamoso, P., González-Briones, A., Rivas, A., Bueno de Mata, F. & Corchado, J.M. The Use of Drones in
Spain: Towards a Platform for Controlling UAVs in Urban Environments. Sensors, 18(5), 1416, 2018
[10] Ambrosia, V. G., & Zajkowski, T. Selection of appropriate class UAS/sensors to support fire monitor-
ing: experiences in the United States. In Handbook of Unmanned Aerial Vehicles (pp. 2723-2754). Springer
Netherlands, 2015
[11] Merino, L., Martínez-de Dios, J. R., & Ollero, A. Cooperative unmanned aerial systems for fire detec-
tion, monitoring, and extinguishing. In Handbook of Unmanned Aerial Vehicles (pp. 2693-2722). Springer
Netherlands, 2015
[12] Shahbazi, M., Théau, J., & Ménard, P. Recent applications of unmanned aerial imagery in natural resource
management. GIScience & Remote Sensing, 51(4), 339-365, 2014
[13] Sharifi, F., Zhang, Y. M., & Aghdam, A. G. Forest fire detection and monitoring using a network of
autonomous vehicles. In The 10th International Conference on Intelligent Unmanned Systems (ICIUS 2014)
(Vol. 29). 2014, September
[14] Bosch, I., Serrano, A., & Vergara, L. Multisensor network system for wildfire detection using infrared image
processing. The Scientific World Journal, 2013.
[15] Merino, L., Caballero, F., Martínez-de-Dios, J. R., Maza, I., & Ollero, A. An unmanned aircraft system
for automatic forest fire monitoring and measurement. Journal of Intelligent & Robotic Systems, 65(1-4),
533-548, 2012
[16] Chamoso, P., Raveane, W., Parra, V., & González, A. UAVs applied to the counting and monitoring of
animals. In Ambient Intelligence-Software and Applications (pp. 71-80). Springer, Cham, 2014
[17] Pérez, A., Chamoso, P., Parra, V., & Sánchez, A. J. Ground vehicle detection through aerial images taken
by a UAV. In Information Fusion (FUSION), 17th International Conference on (pp. 1-6). IEEE, 2014, July
[18] Chamoso, P., Pérez, A., Rodríguez, S., Corchado, J. M., Sempere, M., Rizo, R., ... & Pujol, M. Modeling
Oil-Spill Detection with multirotor systems based on multi-agent systems. In Information Fusion (FUSION),
17th International Conference on (pp. 1-8). IEEE, 2014, July
[19] Bernabéu, C., Corchado Rodríguez, J. M., Rodríguez, S., & Chamoso, P. Aracnocóptero: An Unmanned
Aerial VTOL Multi-rotor for Remote Monitoring and Surveillance, 2011
[20] González-Briones, A., Villarrubia, G., De Paz, J. F., & Corchado, J. M. A multi-agent system for the
classification of gender and age from images. Computer Vision and Image Understanding, 2018
[21] Forest health, natural resources, fire, trees, wildfire, silviculture photos. http://www.forestryimages.org/.
(Accessed: 10 March 2018).
[22] J. Ramiro Martínez-de Dios, Luis Merino, Aníbal Ollero. Fire detection using autonomous aerial vehicles
with infrared and visual cameras. Proceedings of the 16th IFAC World Congress 2005
[23] J.R. Martínez-de Dios, L. Merino, F. Caballero, A. Ollero. Automatic forest-fire measuring using ground
stations and unmanned aerial systems. Sensors, 11 (6), pp. 6328-6353, 2011
[24] Nobuyuki Otsu. A threshold selection method from gray-level histograms. Automatica, 11 (285–296), pp.
23-27, 1975
[25] T.W. Ridler, S. Calvard. Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man
Cybern., 8 (8), pp. 630-632, 1978
[26] Josef Kittler, John Illingworth. Minimum error thresholding. Pattern Recognit., 19 (1), pp. 41-47, 1986
[27] L. Rossi, T. Toulouse, M. Akhloufi, A. Pieri, Y. Tison. Estimation of spreading fire geometrical character-
istics using near infrared stereovision. IS&T/SPIE Electronic Imaging, pages 86500A. (March 2013)
[28] Tom Toulouse. Estimation par stéréovision multimodale de caractéristiques géométriques d’un feu de
végétation en propagation (Ph.D. thesis). University of Corsica (November 2015)