        Use of Unmanned Aerial Vehicles for Wildlife
                      Monitoring

   Pavel Šimek1, Jan Pavlík1, Jan Jarolímek1, Vladimír Očenášek1, Michal Stočes1
  1
   Department of Information Technology, Faculty of Economics and Management, Czech
   University of Life Sciences Prague, e-mails: simek@pef.czu.cz; pavlikjan@pef.czu.cz;
              jarolimek@pef.czu.cz; ocenasek@pef.czu.cz; stoces@pef.czu.cz



       Abstract: Recent developments in UAV (unmanned aerial vehicle)
       engineering have pushed the usage of so-called “drones” into the
       mainstream. The omni-purpose nature of these vehicles has caused an increase
       in demand across various fields. Mass production has resulted in a drop
       in prices, especially for less sophisticated recreational vehicles. However, in
       order to capture quality imagery for further processing, the technical
       sophistication of the mounted camera is the deciding factor, not the UAV
       itself. Many researchers are looking for options to exploit this technology in
       different fields, one of which is wildlife monitoring. This paper aims to present
       a basic overview of knowledge in the area of aerial wildlife censusing and the
       progress made during research conducted at the Czech University of Life
       Sciences Prague.


       Keywords: Image recognition, UAV, wildlife, drones, thermal imaging.




1 Introduction

Efforts to accurately estimate the numbers of wild animals have been around for
centuries. Nowadays, these census results are the basis for determining the amount of
hunting needed to ensure stable populations. In most European countries, including the
Czech Republic, the population of hoofed wildlife has increased in recent decades,
causing more and more damage to forest and field cultures (Bartoš et al., 2010). In
order for the number estimates to fulfill a control function, the results must roughly
correspond to reality. The methods commonly used in the Czech Republic today capture
only 10-33% of the actual population. The inaccuracy of game population estimates is
eloquently evidenced by a comparison of the spring base population with the number of
animals hunted (Kotrba et al., 2005). According to statistics from some countries,
occasionally more animals were hunted than should have been present according to the
census, which is sometimes also the case in the Czech Republic. Another common method
is to use data from camera traps, but the results of such a survey can often be very
unreliable (Claridge and Paull, 2014; Foster and Harmsen, 2012). That is why new,
alternative and more efficient methods are being sought.




   Better results can usually be achieved using more powerful technology, but its use
alone does not guarantee the quality of the outputs. The first findings on aerial
censusing were published more than forty years ago (Graves et al., 1972). Censusing
game from an aircraft or a helicopter is practiced, for example, in the Scandinavian
countries (Liberg et al., 2010). Thermal vision is also often used (e.g. Gill et al.,
1997; Focardi et al., 2001 and others), but mostly in ground-based deployments. In
contrast, mainly in the US and Canada, thermal imaging is expanding not only in
ground censuses but also in aerial imaging. There are many published results of
monitoring various animal species in various environments (Wyatt et al., 1980;
Bayliss and Yeomans, 1989; Wiggers and Beckerman, 1993; Focardi et al., 2001;
Garel et al., 2010; Fuentes et al., 2015).
   The current development of drones and artificial intelligence tools for image
evaluation brings a new dimension to aerial counting and game monitoring methods.
Unmanned aerial vehicles are nowadays known mainly by the term "drones", largely
due to the massive expansion of so-called multi-copters (multi-rotor helicopters);
officially, they are referred to as UAVs (Unmanned Aerial Vehicles). Unmanned
vehicles offer different ways of imaging by combining imagery from varying flight
altitudes. At the same time, this presents new options for retrieving data from
selected areas in real time. Some UAVs are capable of covering an area of several
square kilometers, making them a cheaper and more affordable alternative to
conventional aircraft. Because of the lower scanning height, it is also possible to
obtain very detailed images from an unmanned vehicle (Eisenbeiss, 2011). In addition
to capturing images, monitoring can also be performed "on-the-fly" without
recording, only transmitting the video to the operator's screen.



2 Data acquisition

In order to obtain imagery data for developing the most precise image processing
methodology, our research team conducted several preliminary flights. In cooperation
with employees of Military Forests and Farms (a state enterprise), two enclosed areas
containing a known number of animals were selected. The first area is without any
significant vegetation apart from a few trees around the borders (see Figure 1); it is
essentially a small outdoor livestock confinement area. The second chosen area is
larger, with a few dozen trees and some bushes, and more closely resembles a game
reserve.




Fig. 1: Ground photo of the first testing site


2.1 Equipment

A DJI S900 drone (see Figure 2) was used to conduct the preliminary data acquisition.
It was equipped with a standard camera as well as a FLIR Tau 2 thermal camera. A GPS
module was also installed to help with navigation and to provide the ability
to tag the obtained data with location coordinates.




Fig. 2: DJI S900 multi-copter used to obtain preliminary testing data

  The drone was used in compliance with all current legislation regarding the use of
UAVs in the Czech Republic.


2.2 Flight specifications

Before the actual data acquisition, it was necessary to determine the flight height,
speed and pattern that would yield the best possible data for analysis. The maximum
permitted flight height for drones in the Czech Republic is 300 meters. At such a
distance, however, the video recording might not provide sufficient detail, due to its
resolution, for the image recognition algorithms to function properly. Setting the
flight height too low would result in better quality imagery, but it would take
considerably more time to cover the entirety of a given area. A low flight height also
carries a higher risk of the animals noticing the drone and running from it. As a
compromise, a flight height of 50 to 60 meters was selected.
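
   To illustrate the trade-off behind this compromise, the following minimal sketch
computes the approximate ground swath and ground sampling distance of a nadir-pointing
camera at several altitudes. The field of view and image width used are illustrative
assumptions, not the specifications of the cameras flown in this study.

import math

def swath_and_gsd(altitude_m, hfov_deg=69.0, image_width_px=1920):
    """Approximate ground swath width and ground sampling distance (GSD)
    for a nadir-pointing camera at a given altitude.  The field of view
    and image width are illustrative assumptions."""
    swath = 2.0 * altitude_m * math.tan(math.radians(hfov_deg / 2.0))
    gsd_cm = swath / image_width_px * 100.0  # centimetres per pixel
    return swath, gsd_cm

for alt in (30, 55, 300):
    swath, gsd = swath_and_gsd(alt)
    print(f"{alt:>3} m: swath ~{swath:5.1f} m, GSD ~{gsd:4.1f} cm/px")

Doubling the altitude roughly doubles the swath (and halves the flight time needed),
but also doubles the ground size of each pixel, which is what degrades the detail
available to the recognition algorithms.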
   In order to cover the entire area where the measurements take place, a standard
"zig-zag" or "lawn mower" pattern was selected as the most efficient. If the area is
not convex, it is possible to divide it into smaller convex polygons. The second
option is to circumscribe the area with the smallest possible enclosing convex area.
This may result in capturing imagery of areas that are not of interest, but if the
cut-out area is not very large, it might still be more efficient than flying several
smaller paths over several polygons. Flying this pattern along the longer side
minimizes the number of turns the UAV has to make; a waypoint sketch is given below.
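
   The following sketch generates such a pattern over an axis-aligned rectangle in
local coordinates (e.g. the circumscribing area mentioned above); the dimensions and
line spacing are placeholders, not the parameters used in our flights.

def lawnmower_waypoints(width_m, height_m, spacing_m):
    """Generate a back-and-forth ("lawn mower") waypoint list over an
    axis-aligned rectangle, flying legs along the longer side to minimise
    the number of turns.  Coordinates are local metres."""
    long_side, short_side = max(width_m, height_m), min(width_m, height_m)
    waypoints, y, direction = [], 0.0, 1
    while y <= short_side:
        x_start, x_end = (0.0, long_side) if direction > 0 else (long_side, 0.0)
        waypoints.append((x_start, y))   # start of the leg
        waypoints.append((x_end, y))     # end of the leg
        y += spacing_m
        direction *= -1                  # reverse direction for the next leg
    # Swap axes back if the rectangle was taller than it is wide.
    if height_m > width_m:
        waypoints = [(b, a) for (a, b) in waypoints]
    return waypoints

print(lawnmower_waypoints(400, 250, spacing_m=60))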




   Another issue is the selection of overlap throughout the pattern. Movement of
animals is to be expected, and it is possible that a herd might move from one section
of the given area to another, thereby avoiding being captured by the UAV camera on
both passes. The opposite might also happen: the same animals could be captured two
or more times in different sections of the flight path. Even if we assume that the
animals remain stationary, a certain overlap is necessary to prevent animals from
being captured only at the edges of the frame, because image recognition algorithms
generally work better when the objects being searched for are in the central area.
Since both testing sites are relatively small, we opted for a high overlap during the
preliminary data acquisition. For further flights we plan to adjust the
actual overlap based on the extent of observed animal movement.
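
   As a rule of thumb, the distance between adjacent flight lines follows from the
footprint width and the chosen side overlap; the 60% value below is purely
illustrative, since the actual overlap used in the preliminary flights was simply set
high rather than derived from such a formula.

def line_spacing(footprint_width_m, side_overlap):
    """Distance between adjacent flight lines for a desired side overlap
    (side_overlap=0.6 means 60% of the footprint is shared between passes)."""
    return footprint_width_m * (1.0 - side_overlap)

# With a ~75 m swath (see the altitude sketch above) and 60% side overlap:
print(line_spacing(75.0, 0.6))  # -> 30.0 m between flight lines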
   Lastly, it is necessary to determine the camera angle. If the camera is tilted too
far to the side (more horizontally than vertically), the actual distance to the
ground increases, and obstructions such as trees and terrain have a more significant
impact on the imagery. If the camera is positioned fully vertically, however, the
animals are captured strictly from above, so their extremities are missing from the
image. A picture without limbs might drastically reduce successful object
classification by the image recognition algorithm, as suggested by the results of
Chrétien et al. (2015). For the testing we selected a relatively high angle of
depression, approximately 55 to 65 degrees.
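
   The chosen depression angle determines how far the optical axis travels before it
reaches the ground. The short sketch below computes the slant range and the
horizontal stand-off of the aim point for the selected altitude and angles; it is a
plain geometric illustration, not part of any flight-planning software used in the
study.

import math

def oblique_view_geometry(altitude_m, depression_deg):
    """Slant range along the optical axis and horizontal stand-off of the
    aim point for a camera tilted depression_deg below the horizon."""
    d = math.radians(depression_deg)
    slant_range = altitude_m / math.sin(d)    # camera-to-aim-point distance
    ground_offset = altitude_m / math.tan(d)  # how far ahead the aim point lies
    return slant_range, ground_offset

for angle in (55, 65, 90):  # 90 degrees corresponds to a fully vertical (nadir) view
    r, off = oblique_view_geometry(55.0, angle)
    print(f"{angle:>2} deg: slant range ~{r:5.1f} m, aim point ~{off:5.1f} m ahead")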


2.3 Initial flight results

The testing flights at both chosen sites provided several crucial insights. First, the
sound of the multi-copter's motors, as well as its presence above the area, scared the
animals into running away from it. Since the area was fenced, once the herd reached
the edge, the animals clumped together and stood still (see Figure 3). This may
affect the monitoring both positively and negatively. If the herd is approached by the
UAV from an unfavorable angle, the animals may run aside and thus avoid being captured
by the camera. However, if the herd runs away in the direction of the flight and is
therefore "chased", it may prove very useful: more images become available, providing
more data for image recognition. In areas with higher vegetation, this behavior may
also cause the animals to move significantly, increasing the chances of capturing an
unobstructed image as they are chased through a clearing.




Fig. 3: Aerial photo of the first testing site

   The importance of getting a clear view of the animals became apparent during the
test flight over the second chosen testing site (see Figure 4). Even though the
number of trees is not very high (average Czech forests are much denser), they
provided significant cover for the animals to hide in. This, together with the effect
of terrain shadow (visible in both Figures 3 and 4), can make the obtained imagery
unsuitable for accurate processing.




Fig. 4: Aerial photo of the second testing site




3 Data processing methods

In order to determine the most suitable methods for image processing, our team
compiled the following overview of the current state of the art.
   Image processing of a recognized object consists of a series of steps. First, the
image must be captured and digitized; image preprocessing methods are then applied to
improve it, focusing especially on grayscale conversion, brightness and contrast
adjustment, histogram equalization, image sharpening, and various filtering methods.
Another important step is to use segmentation methods to distinguish the recognized
object from the background, primarily segmentation by thresholding, region colouring
algorithms, edge detection and edge linking methods, and various algorithms for
filling objects. The next image-processing phase is object description. The
best-known methods of object description include the moment method, Fourier
descriptors and chain codes, which can also be used for the so-called structural
description of objects. The final stage of the image processing pipeline is object
classification (recognition). The task of classification is to assign the objects
found in the image to one of a set of previously known classes (Parker, 2011). The
object recognition itself can be accomplished using artificial intelligence or
statistical analysis. Typically, the acquired description of the object is presented
to a classifier, which can determine, with a certain degree of accuracy, which object
it is. The classifier must first be familiarized with the objects that can be
submitted to it; this process is called learning.
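
   As a minimal illustration of the preprocessing and thresholding-based segmentation
steps described above, the following sketch uses OpenCV. The input file name is a
placeholder, and this is not the pipeline ultimately deployed in our research.

import cv2

# "frame.jpg" is a placeholder for one extracted video frame.
image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)        # grayscale conversion
equalized = cv2.equalizeHist(gray)                    # histogram equalization
smoothed = cv2.GaussianBlur(equalized, (5, 5), 0)     # noise filtering

# Segmentation by thresholding (Otsu selects the threshold automatically).
_, mask = cv2.threshold(smoothed, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Contours of the segmented blobs serve as candidate objects for the
# subsequent description and classification stages (OpenCV 4 signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate regions found")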
   An example is the SIFT (Scale-Invariant Feature Transform) method, originally
developed to detect objects in an image scene. According to Noviyanto and Arymurthy
(2013), the SIFT method achieves the best results for the identification of cattle.
From the training set of images, feature descriptor vectors are calculated, which are
subsequently searched for in the test images. If the vectors obtained during training
and testing match sufficiently, the object is detected and recognized at the same
time. The same principle can, however, equally well be used for classification. From
the training sets (one for each class), the algorithm obtains descriptor vectors,
which are then compared with the vectors computed for the test set. In the next step,
the selected classification method assigns the elements of the test set to the
individual classes.
    SIFT consists of four main steps (Lowe, 1999; Lowe, 2004); a minimal matching
sketch follows the list:
         1. detection of extrema within scale-space
         2. refinement of the location of significant points (keypoints)
         3. assignment of orientation to significant points
         4. compilation of a descriptor of the significant points
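
   Below is a minimal sketch of detecting and matching SIFT descriptors between a
training image and a test frame using OpenCV's implementation (which performs the
four steps internally). The file names are placeholders and the ratio-test threshold
is a common default, not a value taken from the cited studies.

import cv2

# Placeholders: a training image of an animal and one test frame.
train = cv2.imread("train_animal.jpg", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test_frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # steps 1-4 handled internally
kp_train, des_train = sift.detectAndCompute(train, None)
kp_test, des_test = sift.detectAndCompute(test, None)

# Match descriptors and keep only sufficiently distinctive matches
# (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_train, des_test, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(f"{len(good)} good matches")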
   Yu et al. (2013) have published an analysis showing that the combination of SIFT
and CLBP (Compound Local Binary Pattern) can serve as a useful technique for
recognizing animals in real, complex scenes. They use sparse-coding spatial pyramid
matching (ScSPM), which extracts dense SIFT descriptors and cell-structured LBP (or
CLBP) as local features, generates global features via weighted sparse coding and
max pooling over a multi-scale pyramid, and classifies images with a linear support
vector machine.
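
   The full ScSPM pipeline is beyond the scope of this overview, but the general idea
of pooling local descriptors into a global feature and classifying it with a linear
SVM can be sketched with a much simpler bag-of-visual-words baseline, assuming OpenCV
and scikit-learn are available; all file names and labels are placeholders.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# All paths and labels below are placeholders for a labelled image set.
train_paths = ["deer_1.jpg", "boar_1.jpg", "deer_2.jpg", "boar_2.jpg"]
train_labels = ["deer", "boar", "deer", "boar"]
test_paths = ["frame_01.jpg"]

sift = cv2.SIFT_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return sift.detectAndCompute(img, None)[1]

def bovw_histogram(path, vocabulary):
    """Pool an image's SIFT descriptors into a normalized visual-word histogram."""
    words = vocabulary.predict(descriptors(path))
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Build the visual vocabulary from all training descriptors
# (assumes the pooled descriptors number at least n_clusters).
vocabulary = KMeans(n_clusters=100, n_init=10).fit(
    np.vstack([descriptors(p) for p in train_paths]))

classifier = LinearSVC().fit(
    [bovw_histogram(p, vocabulary) for p in train_paths], train_labels)
print(classifier.predict([bovw_histogram(p, vocabulary) for p in test_paths]))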




4 Conclusions

The two test flights conducted to obtain initial data for further analysis provided
several key insights. Even though the UAV flies at a relatively high altitude, the
motor noise is still loud enough to startle the animals and cause significant
movement of the entire herd. Images taken by a regular camera show that unless the
animals are captured over an area with little to no vegetation, the imagery might be
unsuitable for deployment of image recognition algorithms. Trees and larger bushes
provide cover for the animals and are a significant obstruction. The effect of
terrain shadow also reduces the clarity of the captured images. Overall, the test
flights suggest that for monitoring in forest environments the use of a regular CCD
camera might be insufficient.
   Our research efforts will therefore shift more towards the use of thermal imaging
or night-vision imaging using the dynamic light spot method. Both of these techniques
are more likely to provide quality data suitable as input for image recognition
algorithms. As for the image recognition itself, we plan to use the SIFT method as
our first option, since it has proven highly suitable for recognizing and classifying
images of wild hoofed animals in other researchers' work. The caveat is that so far
the SIFT method has mainly been used on ground-based imagery gathered by regular
camera traps; images taken from an aerial view at a high depression angle may not
yield equally accurate results. In case this approach does not succeed, we will
either adjust the flight specifications (flight altitude, camera angle, etc.) to
better suit the SIFT method or look for a different algorithm altogether.

Acknowledgments. The results and knowledge included herein have been obtained
owing to support from the following institutional grant: Grant No. 20171016 of the
Internal Grant Agency of the Faculty of Economics and Management, Czech
University of Life Sciences Prague, titled "Use of image recognition methods for
aerial wildlife monitoring".



References

1. Bartoš, L., Kotrba, R., Pintíř, J. (2010): Ungulates and their management in the
   Czech Republic. In: European Ungulates and their Management in the 21st
   century (Apollonio M., Andersen R. and Putman R. eds.), Cambridge University
   Press, London, UK, p. 243-261.
2. Bayliss, P., Yeomans, K. M. (1989): Correcting bias in aerial survey population
   estimates of feral livestock in northern Australia using the double-count
   technique. J. Appl. Ecol. 26: 925-933.
3. Chrétien, L. P., Théau, J., Ménard, P. (2015): Wildlife multispecies remote
   sensing using visible and thermal infrared imagery acquired from an unmanned
   aerial vehicle (UAV). The International Archives of Photogrammetry, Remote




    Sensing and Spatial Information Sciences, Vol. 40, Iss. 1, p. 241-248. ISSN
    1682-1750.
4. Claridge, A.W., Paull, D.J. (2014): How long is a piece of string? Camera
    trapping methodology is question dependent. Camera Trapping: Wildlife
    Management and Research p. 206-214.
5. Eisenbeiss, H. (2011): The Potential of Unmanned Aerial Vehicles for Mapping.
    [Online]. http://www.ifp.uni-stuttgart.de/publications/phowo11/140Eisenbeiss.pdf
6. Focardi, S., De Marinis, A. M., Rizzotto, M., Pucci, A. (2001): Comparative
    evaluation of thermal infrared imaging and spotlighting to survey wildlife.
    Wildlife Society Bulletin 29: 133-139.
7. Foster, R.J., Harmsen, B.J. (2012): A critique of density estimation from camera-
    trap data. Journal of Wildlife Management, Vol. 76, Iss. 2, p. 224-236. ISSN
    0022-541X.
8. Fuentes, M. M. P. B., Bell, I., Hagihara, R., Hamann, M., Hazel, J., Huth, A.,
    Seminoff, J. A., Sobtzick, S., Marsh, H. (2015): Improving in-water estimates of
    marine turtle abundance by adjusting aerial survey counts for perception and
    availability biases. Journal of Experimental Marine Biology and Ecology, Vol.
    471, p. 77-83. ISSN 0022-0981.
9. Garel, M., Bonenfant, C., Hamann, J. L., Klein, F., Gaillard, J. M. (2010): Are
    abundance indices derived from spotlight counts reliable to monitor red deer
    Cervus elaphus populations? Wildlife Biology, Vol. 16, Iss. 1, p. 77-84. ISSN
    0909-6396.
10. Gill, R. M. A., Thomas, M. L., Stocker, D. (1997): The use of portable thermal
    imaging for estimating deer population density in forest habitats. J. Appl. Ecol.
    34: 1273-1286.
11. Graves, H. B., Bellis, E. D., Knuth, W. N. (1972): Censusing white-tailed deer by
    airborne thermal infrared imagery. J. Wildlife Manage. 36: 875-884.
12. Kotrba, R., Bartoš, L., Pluháček, J., Dušek, A. (2005): Estimating game numbers
    by the thermovision method - comparative study of published information (research
    study for Czech Forests), Prague.
13. Liberg, O., Bergström, R., Kindberg, J., Von Essen, H. (2010): Ungulates and
    their management in Sweden. In: European Ungulates and their Management in
    the 21st century (Apollonio M., Andersen R. and Putman R. eds.), Cambridge
    University Press, London, UK, p. 37-70.
14. Lowe D. G., (1999): Object recognition from local scale-invariant features.
    International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157.
15. Lowe D. G., (2004): Distinctive image features from scale-invariant keypoints.
    International Journal of Computer Vision, 60(2):91-110.
16. Noviyanto, A., Arymurthy, A. M. (2013): Beef cattle identification based on
    muzzle pattern using a matching refinement technique in the SIFT method.
    Computers and Electronics in Agriculture, v.99, p.77-84, 2013.




17. Parker, J. R. (2011): Algorithms for Image Processing and Computer Vision.
    Wiley Publishing, Indianapolis, ISBN 978–0–470–64385–3.
18. Wiggers, E. P., Beckerman, S. F. (1993): Use of thermal infrared sensing to
    survey white-tailed deer populations. Wildlife Society Bulletin 21: 263-268.
19. Wyatt, C. L., Trivedi, M., Anderson, D. R. (1980): Statistical evaluation of
    remotely sensed thermal data for deer census. J. Wildlife Manage. 44: 397-402.
20. Yu, X., Wang, J., Kays, R., Jansen, P.A., Wang, T., Huang, T. (2013). Automated
    identification of animal species in camera trap images. EURASIP Journal on
    Image and Video Processing 2013: 52.



