=Paper= {{Paper |id=Vol-2588/paper6 |storemode=property |title=Automated Complex for Aerial Reconnaissance Tasks in Modern Armed Conflicts |pdfUrl=https://ceur-ws.org/Vol-2588/paper6.pdf |volume=Vol-2588 |authors=Pylyp Prystavka,Vladyslav Sorokopud,Artem Chyrkov,Vladyslav Kovtun |dblpUrl=https://dblp.org/rec/conf/cmigin/PrystavkaSCK19 }} ==Automated Complex for Aerial Reconnaissance Tasks in Modern Armed Conflicts== https://ceur-ws.org/Vol-2588/paper6.pdf
 Automated Complex for Aerial Reconnaissance Tasks in
              Modern Armed Conflicts

        Pylyp Prystavka [0000-0002-0360-2459], Vladyslav Sorokopud [0000-0002-3256-7031],
        Artem Chyrkov [0000-0001-6582-8018] and Vladyslav Kovtun [0000-0002-1408-5805]

                     National Aviation University, Kyiv, Ukraine
           chindakor37@gmail.com, vlad.sorokopud.i@gmail.com,
                           a.chyrkov@nau.edu.ua



        Abstract. In military combat missions and/or rescue operations, intelligence
        data play a significant role. In recent years, the use of unmanned aerial vehi-
        cles (UAVs) has become an effective way of obtaining such data. As practice
        shows, the main aerial intelligence tasks are object detection and object track-
        ing, and both are desirable to automate. Existing UAVs either have no such
        task automation functionality at all, or have only partial functionality intended
        for civil purposes. This publication describes an automated complex that im-
        plements search for potentially interesting objects and tracking of the object of
        interest.

        Keywords: aerial reconnaissance, object detection, object tracking, unmanned
        aerial vehicle.


1       Introduction

In modern armed conflicts, unmanned aerial vehicles (UAVs) are actively used at the
tactical level, and their share is increasing annually. This trend is maintained due to
the high efficiency of aerial reconnaissance missions [37].
   As noted in [41], one of the basic requirements for cyber intelligence systems is the
automation of typical tasks. In practice, the typical tasks are searching for suspicious
(potentially dangerous) objects of a significant number of classes and monitoring the
target object(s). Automation of such tasks is therefore an important issue.

    1.1. Overview of existing UAVs

Since the first half of the 2010s, UAVs have been actively introduced to solve a wide
range of practical problems, primarily in the agro-industrial sector and the security
sector, as well as in specialized dual-use and military systems. However, the technol-
ogies, methods and tools (TMT) used in these UAVs are not presented in open scien-
tific publications. This is due to the fact that: 1) some TMT are military equipment or
dual-use products, or 2) the vast majority of UAVs are produced by private compa-
nies, so the TMT used in them are a trade secret. Therefore, to understand the current
state of the industry, instead of analyzing scientific publications it is necessary to
analyze open UAV documentation, manufacturers' and sellers' websites, relevant
specialized exhibitions, and the like. Below is an overview of the UAVs present in
the Ukrainian market.

    Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attrib-
ution 4.0 International (CC BY 4.0). CMiGIN-2019: International Workshop on Conflict Management in
Global Information Networks.
   The most famous battle control system (of which a reconnaissance UAV is a part)
developed in Ukraine is Combat Vision [1]. In this system, the aircraft is used for
aerial reconnaissance, in particular for mapping objects of interest. However, video
analysis is not automated and is performed by the operator in a completely manual
mode.
   One example of a UAV with implemented functionality for automatic (automated)
monitoring, search and identification of objects of interest is the Micro C-UAS
WARMATE (Poland) [2]. A licensed copy of this complex is available on the Ukrain-
ian market (as noted in [3] and on news agency websites). However, it is designed to
defeat the enemy by self-destruction ("aircraft-shell", i.e. a loitering munition), not
for reconnaissance. Other problems are a high price fixed in foreign currency and
dependence on a foreign manufacturer. In addition, an analysis of specialized exhibi-
tions showed that the licensed copy available on the Ukrainian market has limitations
in automatic object search.
   In tactical-level combat control systems, in particular [4]–[6], aircraft are used to
record video that is then watched by the operator (similar to [1]), or as transport plat-
forms for solving various technical problems.
   Ukrainian companies ([7]–[8] and others) sell various types of UAVs for a wide
range of tasks, but none of them lists automated processing of camera video among
those tasks.
   The UAVs adopted by the Armed Forces of Ukraine (A1 SM Fury [9], DeViRo
Leleka-100 [10] and others) include an aircraft that carries a video camera, without
the possibility of automatic (automated) video processing.
   "Blowfish A2" (China) [11] is a UAV with the ability to perform certain tasks au-
tomatically. However, analysis of open sources shows that it lacks functionality for
automatic (automated) object search in video.
   At the end of 2018, an agreement was signed on the supply of the Bayraktar TB2
UAV (Turkey) to the Ukrainian Armed Forces. Analysis of open sources shows that
functionality for automatic (automated) object search in video is also absent, even in
the form of additional modules.
   An example of a system for automatic target search in video from UAVs is Project
Maven (developed by Google by order of the US Department of Defense). It relies on
neural networks, which require a significant amount of training samples; forming
such samples is a separate labor-intensive and, in the military sphere, dangerous task.

  1.2. UAV with real-time tracking

   1. DJI Phantom 3 Standard
   The DJI Phantom 3 delivers great performance with Follow Me enabled. The
drone uses GPS/GLONASS to accurately determine its position in Follow Me mode.
Thanks to this system, the quadcopter can hover in the same position in the air while
the camera monitors the object [32].
   Follow Me is just one of the interesting features of the DJI Phantom 3. The quad-
copter also offers smart modes such as Waypoints (records a specific flight path),
Point of Interest (the drone automatically rotates around the object), Home Lock
(control is carried out relative to the pilot's position) and Heading Lock (all flight
controls are locked relative to the current heading).
   2. 3DR Solo
   Solo has several intelligent control modes: Follow Me, Orbit, Cable Cam and
Selfie. The intelligent computer system allows the angle, distance and perspective to
be changed during the drone's flight [33]. The result is more detailed video with
smooth transitions and movement.
   3. 3DR IRIS+
   IRIS+ has an enhanced Follow Me mode, and all settings can be configured from a
tablet. In addition to following the subject, intelligent technology controls the gimbal
to reduce jitter and take clear photos and videos [34].
   One small drawback should be kept in mind: the drone has no way to avoid obsta-
cles. When using Follow Me mode, the selected course should be relatively free of
high obstacles that the drone could potentially encounter.
   4. Hubsan H501S X4
   The built-in GPS system allows the quadcopter to track the object [35]. This func-
tion can be enabled once both the quadcopter and the transmitter are synchronized
with at least six satellites; this safety measure is necessary for correct positioning of
the quadcopter.
   5. Ehang Ghost Drone 2.0
   The Follow Me feature is based on GPS positioning. Ghost 2.0 has no obstacle
sensor, so it is better used in open areas. The so-called G BOX included in the pack-
age must be carried by the subject at all times to ensure more accurate tracking [36].
In night mode, the drone turns on its LED lights so it can be tracked more easily.

   All the above UAV models share common drawbacks, namely a high price and
closed source code, which makes them unsuitable for use in search and reconnais-
sance systems.

  1.3. Overview of Existing Computer Vision Techniques

Since 2012, convolutional neural networks (CNNs) have reached a practically ac-
ceptable level [12]. Their advantages are high search accuracy, low type I and type II
error rates, and sufficient speed for a wide range of practical tasks when running on
x86_64 hardware with a GPU. Their disadvantages are: 1) the need for a training
sample of significant size; 2) low speed when running on hardware other than the
above; 3) significant time spent on the training phase, e.g. in [13] the declared CNN
training time is 14 hours, in [14] about 240 hours.
   Disadvantage (1) is addressed by methods specific to each subject area. Overcom-
ing disadvantage (2) is one of the main directions of research in this area; for exam-
ple, possible ways to increase CNN speed are given in [15]–[16], but there is current-
ly no radical solution to this problem.
   The results of analyzing other existing classes of methods for automatic object
search in video are as follows (taken from [17]). Template matching methods also
require a reference base; in addition, some of them [18]–[19] are not robust to noise,
and some [20] do not provide sufficient performance. Approaches based on singular
points and a classifier [21] require a training set and a preliminary training stage.
Segmentation methods [22]–[26] require manual parameter tuning; in addition, some
of them have unacceptably low speed, and some do not give acceptable results on
noisy video frames or in textured areas. The class of active shape methods [27], sta-
tistical recognition methods [28], and methods based on imitating human visual at-
tention (saliency) [29] also need a training sample and have a preliminary training
stage.
   In [30], an experimental sample of an automated target search system using UAVs
is described; it provides automatic generation of a list of suspicious objects and selec-
tion of the object(s) of interest. The core of this system is the method [31] for auto-
matic search of suspicious objects in video from a UAV camera. It is designed to find
objects that stand out against the background and occur rarely, and it does not require
a training sample (it self-learns on the first frames of the video) [31].
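The method of [31] is not reproduced in this paper; the following is only a minimal sketch of its general idea, i.e. self-learning a background model on the first frames and then flagging regions that stand out. The block size, threshold, and per-tile statistic are assumptions for illustration, not the authors' actual algorithm:

```python
import numpy as np

BLOCK = 16    # assumed tile size in pixels
THRESH = 3.0  # assumed deviation threshold, in standard deviations

def block_means(frame):
    """Mean intensity of each BLOCK x BLOCK tile of a grayscale frame."""
    h, w = frame.shape
    tiles = frame[: h // BLOCK * BLOCK, : w // BLOCK * BLOCK]
    tiles = tiles.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return tiles.mean(axis=(1, 3))

class SuspiciousObjectDetector:
    """Self-learning background model: accumulate statistics over the
    first frames, then flag tiles deviating strongly from the background."""
    def __init__(self):
        self.samples = []

    def learn(self, frame):
        self.samples.append(block_means(frame))

    def detect(self, frame):
        bg = np.stack(self.samples)
        mu, sigma = bg.mean(axis=0), bg.std(axis=0) + 1e-6
        z = np.abs((block_means(frame) - mu) / sigma)
        return np.argwhere(z > THRESH)  # (row, col) indices of flagged tiles
```

Any tile whose mean deviates from the learned background by more than THRESH standard deviations becomes a candidate for the operator's list of suspicious objects.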

  1.4. Formulation of the problem

Based on the foregoing, the purpose of this publication is to develop an automated
complex based on the experimental sample [30] that implements search for potential-
ly interesting objects as well as tracking of the object of interest.

2. Automated Complex Description

System requirements. The automated system (AS) being developed should have the
following functionality:
    • flight in research mode;
    • synchronization with the onboard autopilot;
    • search for potentially suspicious objects in automatic mode;
    • selection by the operator of objects of interest from an automatically generated list;
    • tracking of objects selected by the operator.
    Since the AS presented in this publication is based on the AS of [30], a brief de-
scription following [30] is given below.
    Hardware. The hardware of the AS consists of a UAV with target equipment on
board, a ground station (GS), and a communication channel between them (Fig. 1).
   [Figure: GPS, autopilot and camera connected to a Raspberry Pi on board the
UAV, which communicates with the client app.]

                               Fig. 1. Composition of the automated system

     The target equipment on board the UAV, given the problem being solved, is a
video camera and a data processing device. The latter is a single-board computer
(SBC). For this class of tasks, publication [37] recommends using the Raspberry Pi 3
Model B or DragonBoard 410c SBCs.
     As the GS for processing data from the UAV and for flight mission control, it is
proposed to use a laptop with the following minimum characteristics: Intel Core i5
processor, 8 GB of RAM.
     The functional diagram of the AS is shown in Fig. 2.


   [Figure: the GPS supplies coordinates to the autopilot; the autopilot exchanges
sync/control data and the camera supplies video to the Raspberry Pi, which commu-
nicates with the client app over Wi-Fi, Ethernet or radio.]

                                     Fig. 2. Functional diagram of the AS

   Establishing high-quality communication between the UAV and the GS is a diffi-
cult problem, since any channel can be jammed or intercepted. The choice of the
communication channel type is a separate task beyond the scope of this publication.
   Search for potentially dangerous objects. To automatically search for suspicious
(potentially dangerous) objects on board the UAV, a software implementation of the
method from [31] is used.
   The coordinates of suspicious (potentially dangerous) objects are determined on
the basis of [38] or [39].
   The structure of the automated system. The principle of the system as a whole is
as follows: the client part (the UAV with the SBC) processes the streaming video
from the camera, detects suspicious objects, receives telemetry parameters (GPS
coordinates and the UAV's yaw, pitch and roll angles) from the autopilot subsystem,
and sends them to the server (the GS). The server part allows the operator to view the
objects received from the UAV; the operator can select a specific object and send
information about it back to the UAV, after which the UAV changes its flight mode
and switches to searching for the object selected by the operator. When the UAV
finds the selected object, it begins to follow it [30]. The schematic principle of the
system is shown in Fig. 3.


   [Figure: on board the UAV, the Raspberry Pi gets the image from the camera and
the UAV coordinates from the autopilot, detects and tracks objects; the client app
sends the detected objects and UAV coordinates to the ground station, which sends
back the target object and sets the flight mode.]

            Fig. 3. The structure of the complex: the relationship between the elements
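The exact exchange format of the complex is not specified in this paper; as an illustration only, a minimal JSON exchange between the client (UAV) and the server (ground station), carrying a detected object with telemetry and the reply selecting a target, might look as follows (all field names are hypothetical):

```python
import json

def make_object_message(obj_id, bbox, lat, lon, yaw, pitch, roll):
    """Serialize one detected object plus UAV telemetry for the ground station."""
    return json.dumps({
        "type": "suspicious_object",
        "id": obj_id,
        "bbox": bbox,            # [x, y, w, h] in frame pixels
        "uav": {"lat": lat, "lon": lon,
                "yaw": yaw, "pitch": pitch, "roll": roll},
    })

def make_target_reply(obj_id, mode="track"):
    """Ground-station reply: the selected object and the new flight mode."""
    return json.dumps({"type": "set_target", "id": obj_id, "mode": mode})
```

Messages of this shape could be carried over any of the channels shown in Fig. 2 (Wi-Fi, Ethernet or radio), with encryption applied on top as noted below.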

   Thus, the tasks of the client side of the AS are: processing video from the UAV,
searching for suspicious objects, searching for the target, and tracking the target. The
tasks of the server side of the AS are: displaying suspicious objects, providing the
means to select a specific object, changing the parameters of the suspicious object
search algorithm, determining the coordinates of the UAV camera's viewing area,
and displaying a UAV flight map [30].
   To protect the data from unauthorized viewing, it makes sense to encrypt it. Data
encryption is also a separate task that deserves a separate publication.
   User interface. One of the main tasks of the server part (GS) is interaction with
the user (operator). For this purpose, a graphical interface was developed as part of
the server application. The structure of user interaction and the graphical interface is
shown in Fig. 4.
   [Figure: the main window contains Objects, Selected object and Settings tabs. The
Objects tab supports viewing objects and selecting an object; the Selected object tab
supports viewing the selected object, detecting its coordinates and sending it to the
UAV; the Settings tab supports setting parameters and sending them to the UAV.]

            Fig. 4. The structure of user interaction and graphical interface [30]

   The procedure for working with the system is as follows:
   1. Set up a Wi-Fi communication channel.
   2. Run the server application on the GS. Find out the IP address of the GS.
   3. Run the client application on the UAV. At startup, specify the IP address of the
GS.
   4. Launch the UAV along the necessary path.
   5. During the flight, view the suspicious objects that the AS periodically sends
from the UAV to the GS. If necessary, change the settings (algorithm parameters).
   6. If an object of interest appears in the list, select it and instruct the UAV to per-
form automatic actions in relation to it (for example, put the UAV in automatic track-
ing mode) [30].
   7. In automatic target tracking mode, monitor the tracking process on the GS. If
the target object is lost from focus, re-capture it and restart tracking mode.
   8. After completing the task, land the UAV and shut down the client and server
applications.

   Automatic UAV control. The single-board computer can send commands to the
autopilot in automatic mode; the MAVSDK library is used for this. Fig. 5 shows the
communication scheme.


   [Figure: communication chain between the client app, MAVSDK, MAVLink and
the autopilot.]

                              Fig. 5. Communication scheme
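As a sketch of this link, the single-board computer could command the autopilot through MAVSDK's Python bindings roughly as follows. The UDP address, altitude handling and function split are assumptions, and a reachable MAVLink autopilot (e.g. a SITL simulator) is required to actually run the async part:

```python
import asyncio

def tracking_waypoint(obj_lat, obj_lon, alt_m):
    """Pure helper: the position the UAV should fly to in order to
    re-acquire the selected object (here simply the object's own lat/lon
    at the current working altitude)."""
    return (obj_lat, obj_lon, alt_m)

async def goto_object(obj_lat, obj_lon, alt_m=50.0):
    """Command the autopilot to fly to the selected object's position."""
    from mavsdk import System  # imported lazily; mavsdk is an extra dependency
    drone = System()
    await drone.connect(system_address="udp://:14540")  # assumed address
    lat, lon, alt = tracking_waypoint(obj_lat, obj_lon, alt_m)
    await drone.action.goto_location(lat, lon, alt, 0.0)  # yaw = 0 deg

# usage (with an autopilot running): asyncio.run(goto_object(50.45, 30.52))
```

Note that `goto_location` expects an absolute altitude, so in practice the working altitude would be derived from the autopilot's telemetry rather than hard-coded.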
   Target tracking. The developed system implements an algorithm for searching
for and tracking targets. When the operator selects an object, the UAV returns to the
coordinates of that object and starts the search algorithm; if the object is found, the
UAV begins the tracking process. Fig. 6 shows an example of target tracking.




                           Fig. 6. An example of target tracking

   The choice of the mathematical tracking method takes into account the features of
the camera on board the UAV and the features of the target objects from the point of
view of computer vision. An example of a comparative analysis of trackers by these
criteria is given in [40].
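The specific tracker is chosen per [40] and is not fixed here; as a minimal illustration of the template-style tracking such comparisons cover, the target patch can be re-located in the next frame by a sum-of-absolute-differences (SAD) search (the search radius is an assumption):

```python
import numpy as np

def track_sad(prev_frame, bbox, next_frame, search=8):
    """Re-locate the target patch in next_frame by minimizing the sum of
    absolute differences (SAD) over a +/- search pixel window around bbox."""
    x, y, w, h = bbox
    template = prev_frame[y:y + h, x:x + w].astype(np.float32)
    best, best_xy = np.inf, (x, y)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = x + dx, y + dy
            if (nx < 0 or ny < 0 or
                    ny + h > next_frame.shape[0] or nx + w > next_frame.shape[1]):
                continue  # candidate window would fall outside the frame
            cand = next_frame[ny:ny + h, nx:nx + w].astype(np.float32)
            sad = np.abs(cand - template).sum()
            if sad < best:
                best, best_xy = sad, (nx, ny)
    return (*best_xy, w, h)  # updated bounding box
```

This brute-force search is adequate only for slow target motion, which is consistent with the test-site results reported below; faster targets require the more capable trackers compared in [40].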
   Tests of the developed AS at the test site showed high-quality results when the
target object moves at low speed against a relatively simple background.

3. Conclusions

Based on the experimental sample [30], an automated complex is implemented that
provides automatic generation of a list of suspicious (potentially dangerous) objects,
selection of an object of interest from it, and tracking of the selected object. The sys-
tem was tested on test objects at the test site. The test results are positive when the
target object moves at low speed against a relatively simple background.

Further research can be aimed at tuning the developed system by choosing the best
tracking method and optimizing the UAV control process.

References

 1. ComBat Vision, https://combat.vision/, last accessed 2019/10/16.
 2. Warmate | WB Electronics, http://wb.com.pl/warmate-en/?lang=en, last accessed
    2019/10/16.
 3. WB Electronics Warmate — Wikipedia,
    https://en.wikipedia.org/wiki/WB_Electronics_Warmate, last accessed 2019/10/16.
 4. Defence       Software        |    Command        and      Control     |    Interoperability,
    https://systematic.com/defence/, last accessed 2019/10/16.
 5. Warfighter Information Network-Tactical (WIN-T) - General Dynamics Mission Systems,
    https://gdmissionsystems.com/c4isr/warfighter-information-network-tactical-win-t/,       last
    accessed 2019/10/16.
 6. Force       XXI       Battle      Command        Brigade      and      Below      (FBCB2),
    https://pdfs.semanticscholar.org/cb40/aeb3ca16cec0a0008c21409ebb9ff008084c.pdf, last
    accessed 2019/10/16.
 7. Matrix UAV, http://muav.com.ua/, last accessed 2019/10/16.
 8. Long-range multirotor UAVs | Tethered drones | Aeromagnetic survey, https://umt.aero/,
    last accessed 2019/10/16.
 9. A1 SM “Furia”,
    https://www.facebook.com/media/set/?set=a.1620187274919672.1073741836.143418115
    0186953&type=3, last accessed 2019/10/16. (in Ukrainian)
10. Unmanned Aerial System DEVIRO “Leleka-100”, http://deviro.ua/leleka-100, last ac-
    cessed 2019/10/16. (in Ukrainian)
11. Blowfish A2, http://ziyanuav.com/blowfish2.html, last accessed 2019/10/16.
12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: “Imagenet classification with deep convolu-
    tional neural networks”. In: Advances in neural information processing systems, NIPS,
    Lake Tahoe, Nevada, USA, 2012, pp. 1097-1105. doi:10.1145/3065386.
13. Majumder, S., Balaji, N., Brey, K., Fu, W., Menzies, T.: “500+ Times Faster Than Deep
    Learning (A Case Study Exploring Faster Methods for Text Mining StackOverflow)”, May
    2018 [online]. https://arxiv.org/pdf/1802.05319.pdf
14. Gu, X., Zhang, H., Zhang, D., Kim, S.: “DeepAPI learning”. In: Proceedings of the 2016
    24th ACM SIGSOFT International Symposium on Foundations of Software Engineering,
    2016, pp. 631-642.
15. Zhang, X., Zou, J., He, K., Sun, J.: “Accelerating Very Deep Convolutional Networks for
    Classification and Detection”, Nov 2015 [online]. https://arxiv.org/abs/1505.06798
16. Accelerating Convolutional Neural Networks on Raspberry Pi, http://cv-
    tricks.com/artificial-intelligence/deep-learning/accelerating-convolutional-neural-
    networks-on-raspberry-pi/, last accessed 2019/10/16.
17. Prystavka, P., His, D., Chyrkov, A.: “Technique for Automated Target Object Search in
    Video Stream from UAV in Post-Processing Mode”. Ukrainian Information Security Re-
    search Journal, 2(21), 97–103 (2019). (in Ukrainian)
18. Lowe, D.G.: “Object recognition from local scale-invariant features”, In: Proc. of the 7th
    IEEE International Conference on Computer Vision, Greece, September 1999.
    https://dx.doi.org/10.1109%2FICCV.1999.790410.
19. Bay, H., Tuytelaars, T., Gool, L.V.: “SURF: Speeded Up Robust Features”, Computer Vi-
    sion – ECCV 2006. Lecture Notes in Computer Science, vol 3951, 2006.
20. Alsaade, F.: “Fast and Accurate Template Matching Algorithm Based on Image Pyramid
    and Sum of Absolute Difference Similarity Measure”, Research Journal of Information
    Technology, vol. 4, issue 4, pp 204-211, 2012. http://dx.doi.org/10.3923/rjit.2012.204.211.
21. Viola, P., Jones, M., “Rapid object detection using a boosted cascade of simple features”.
    In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and
    Pattern Recognition. CVPR 2001, USA, 2001. doi:10.1109/CVPR.2001.990517
22. Otsu, N., “A Threshold Selection Method from Gray-Level Histograms”. In: IEEE Trans-
    actions on Systems, Man and Cybernetics, vol 9, issue 1, pp 62-66, 1979.
23. Canny, J., “A Computational Approach to Edge Detection”, In: IEEE Transactions on Pat-
    tern Analysis and Machine Intelligence, volume PAMI-8, issue 6, pp 679-698, Nov. 1986.
    https://doi.org/10.1109/TPAMI.1986.4767851.
24. Lloyd, S.P.: “Least squares quantization in PCM”, In: IEEE Transactions on Information
    Theory, vol. 28, issue 2, pp 129-137, March 1982.
25. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: “DeepLab: Semantic
    Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Con-
    nected CRFs”, Submitted on June 2016, last revised on May 2017.
26. Papandreou, G., Chen, L.-C., Murphy, K., Yuille, A.L.: “Weakly- and Semi-Supervised
    Learning of a DCNN for Semantic Image Segmentation”, Submitted on Feb 2015, last re-
    vised on Oct 2015. https://arxiv.org/abs/1502.02734.
27. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: “Active Shape Models – Their Train-
    ing and Application”, Computer Vision and Image Understanding, vol 61, issue 1, pp 38-
    59, January 1995. https://doi.org/10.1006/cviu.1995.1004.
28. Gorelik, A., Skripkin, V.: Object Recognition Methods: A Studentbook. 2nd edn. Higher
    School, Moscow, USSR, (1984). (in Russian)
29. Gotovac, S., Papić, V., Marušić, Ž.: “Analysis of saliency object detection algorithms for
    search and rescue operations”, In: 2016 24th International Conference on Software, Tele-
    communications and Computer Networks (SoftCOM), Split, Croatia, September 2016.
30. Prystavka, P., Sorokopud, V., Chyrkov, A.: “Experimental Version of Automated System
    for Suspicious Objects Search on Video Stream from Unmanned Aircraft”. Weapons Sys-
    tems and Military Equipment, 2(50), 26–32 (2017). (in Ukrainian)
31. Chyrkov, A., Prystavka, P.: “Suspicious Object Search in Airborne Camera Video
    Stream”. In: Hu Z. et al. (eds) Advances in Computer Science for Engineering and Educa-
    tion. ICCSEEA 2018. Advances in Intelligent Systems and Computing, vol 754. Springer,
    Cham, Switzerland, 340–348 (2018). doi:10.1007/978-3-319-91008-6_34
32. Phantom | DroneUA,
    http://store.drone.ua/phantom/?gclid=Cj0KCQjwrrXtBRCKARIsAMbU6bHSc1ZuyG0bj
    mX2cK7lmPweWAnmqHW9j_W1ejf1QIwQ2j9St-173-gaAtm-EALw_wcB, last accessed
    2019/10/16
33. Quadrocopter 3DR Solo + gimbal do Hero 3/4, https://led-expert.in.ua/p298730905-
    kvadrokopter-3dr-solo.html, last accessed 2019/10/16
34. 3DR IRIS+, http://quadrocopter.ua/quadrocopters_copters/3DR-IRIS
35. Hubsan       H501S      High    Edition,    https://hubsan.in.ua/product/hubsan-h501s-pro-
    x4/?gclid=Cj0KCQjwrrXtBRCKARIsAMbU6bFi8WuhgyuEYEKa5obtSG77hlvDiJdXrEk
    75pPgH7nc18eO2KHFMmgaAiFwEALw_wcB, last accessed 2019/10/16
36. Ehang GhostDrone 2.0, https://ek.ua/EHANG-GHOSTDRONE-2-0.htm, last accessed
    2019/10/16
37. Shevchenko, A., “Comparative Analysis of Microcomputers for Data Processing”. In:
    MPZIS 2016, Dnipropetrovsk, Ukraine (2016). (in Ukrainian)
38. Nichikov, E., Chyrkov, A.: “Information Technology of UAV Camera Field of View
    Calculation”. Problems of Informatization and Control, 4(52), 106–112 (2015). (in Ukrainian)
39. Buryi, P., Prystavka, P., Sushko, V.: “Automatic Definition the Field of View of Camera
    of Unmanned Aerial Vehicle”. Science-Based Technologies, 2(30), 151–155 (2016).
40. Chyrkov, A.: “Comparative Analysis of Object Tracking Methods in Video Stream from
    UAV Camera”. Problems of Informatization and Control, 1(53), 78–82 (2016). (in
    Ukrainian)
41. Hryshchuk, R., Danyk, Yu.: Fundamentals of Cybersecurity. ZhNAEU, Zhytomyr,
    Ukraine (2016). (in Ukrainian)