<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neural network approach to 5G digital modulation recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bohdan Kotyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denys Bakhtiiarov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Lavrynenko</string-name>
          <email>oleksandrlavrynenko@tks.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>Liubomyra Huzara Ave. 1, Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State Scientific and Research Institute of Cybersecurity Technologies and Information Protection</institution>
          ,
          <addr-line>Maksym Zalizniak Str., 3/6, Kyiv, 03142</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In digital communication, correct identification of the modulation type used is key to reliable processing of received signals. This work presents results for the task of digital modulation type identification, assuming perfect receiver synchronization has already been achieved. A multilayer perceptron was created and tuned with the Adam optimization algorithm to lower classification errors, using cumulants, which proved to be highly informative features for modulation type determination. After the results were collected, an analysis was carried out on how hidden layers affect the performance of the neural network; it was found that a three-layer perceptron gives excellent accuracy. Results show that at a signal-to-noise ratio (SNR) of 5 dB, recognition accuracy is about 99%. This methodology provides a reliable pathway to correct modulation classification and to future research on improving the model's ability to estimate the SNR and on refining cumulant selection for better classification. In future work, it is planned that the model will be able to process uncertain input parameters to further enhance the system's adaptability and effectiveness in real-life scenarios.</p>
      </abstract>
      <kwd-group>
        <kwd>machine learning</kwd>
        <kwd>neural network</kwd>
        <kwd>5G</kwd>
        <kwd>high order cumulants</kwd>
        <kwd>digital modulation</kwd>
        <kwd>ReLU</kwd>
        <kwd>Softmax</kwd>
        <kwd>Adam</kwd>
        <kwd>backpropagation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Training a neural network is seen as an incremental activity that enables the network to acquire
pattern recognition capabilities with respect to data by modifying the weights of the synaptic
connections among its neurons [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>One of the most well-known types of ANNs is the multilayer perceptron (MLP), shown in
Figure 1, which has several layers, each consisting of many neurons.</p>
      <p>
        A neural network consists of several or many layers, and therefore a large number of neurons. A
neuron is a communication device with input and output connections. Its function consists of
two parts: the first is to calculate the discriminant function df, and the second is the activation
function act(j,i), as shown in Figure 2 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The following values are used in the figure: in - input information features, out - output values of
the neuron, w' - weight coefficients (synapse), n - number of input information features.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Basic technological stages</title>
      <p>
        For our neural model to succeed in determining the modulation type, we need to choose its
parameters: the activation and classification functions, the backpropagation algorithm, and the
information features used to train the model [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2.1. Activation function</title>
        <p>
          The discriminant function df is a weighted sum of the input signals; for the j-th neuron of the
input layer this sum is expressed as [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]:
        </p>
        <p>df(j,1) = Σ_{i=1}^{n} w_i(j,1)·x_i, (1)
where w(j,1) = (w_1(j,1), w_2(j,1), …, w_n(j,1)), j = 1, …, N1, is the row vector of synaptic
connections of the j-th neuron, N1 – the number of neurons in the input layer, n – the number of
input features, x_i – the i-th input feature.</p>
        <p>
          The activation function of a neural network strongly influences the output of a neuron for a
given input: it determines whether the neuron activates and propagates information or stays
quiescent. Many types of activation functions can be applied in neural networks, each with its
own pros and cons [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The following functions have been
considered:
1. Linear activation function. It simply returns the input value without any changes. The
function looks like this [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]:
        </p>
        <p>f(x) = x. (2)
2. Sigmoid activation function. Function formula:</p>
        <p>f(x) = 1/(1 + e^(−x)). (3)
It converts any number to a value between 0 and 1, making it useful for calculating
probabilities.
3. Hyperbolic tangent (tanh). Function formula:</p>
        <p>f(x) = (e^x − e^(−x))/(e^x + e^(−x)). (4)</p>
        <p>
          This is another S-shaped function. It converts numbers to values in the range [−1, 1].
4. ReLU (Rectified Linear Unit). Function formula:
        </p>
        <p>f(x) = max(0, x). (5)</p>
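        <p>The four activation functions above can be implemented directly; the following is an illustrative sketch of eqs. (2)–(5), with function names chosen here for clarity rather than taken from the paper's code.</p>
        <preformat>
```python
import math

def linear(x):
    # Eq. (2): identity, returns the input unchanged
    return x

def sigmoid(x):
    # Eq. (3): squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Eq. (4): S-shaped, output in the range [-1, 1]
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def relu(x):
    # Eq. (5): zero for negative inputs, identity for positive ones
    return max(0.0, x)
```
        </preformat>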
        <p>
          In this article, the activation function used was “ReLU”. A simple nonlinear function, ReLU
transforms the input signal by setting all negative values to zero and keeping positive values
unchanged. In this case, the output of each neuron is defined as follows [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ]:
        </p>
        <p>o(j,m) = ReLU(df(j,m)) = max(0, df(j,m)). (6)</p>
        <p>
          In the hidden layers, each neuron processes a weighted sum of its inputs, which can be
represented by the following expressions [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]:
df(h,m) = Σ_{j=1}^{N_{m−1}} w_j(h,m)·o(j,m−1), (7)
o(h,m) = ReLU(df(h,m)) = max(0, df(h,m)), (8)
where w(h,m) – row vector of synaptic connections at the input of the h-th neuron of the m-th hidden
layer, m – number of hidden layers, Nm – the number of neurons in the m-th hidden layer.
        </p>
        <p>In the output layer, each neuron processes a weighted sum of its inputs:</p>
        <p>df(k,o) = Σ_{h=1}^{N_m} w_h(k,o)·o(h,m), (9)
where w(k,o) is the row vector of synaptic connections at the input of the k-th output neuron,
Nk – the number of neurons in the output layer.</p>
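        <p>Putting eqs. (1)–(9) together, the forward pass of the perceptron can be sketched as follows; the layer sizes and random weights are illustrative assumptions, not the paper's configuration.</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def forward(x, layers):
    """x: input feature vector; layers: list of (W, b) pairs per layer."""
    a = x
    for W, b in layers[:-1]:
        a = relu(W @ a + b)   # hidden layers: discriminant sum, then ReLU
    W, b = layers[-1]
    return W @ a + b          # output layer: weighted sum only

# Example: 6 input features, two hidden layers of 16 neurons, 10 outputs
sizes = [6, 16, 16, 10]
layers = [(0.1 * rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
out = forward(rng.normal(size=6), layers)
```
        </preformat>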
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Classification function</title>
        <p>
          To determine the type of modulation, a classification function called the Softmax function is used. It
transforms a set of input values into a set of positive numbers whose sum is 1 [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Thus, each number
in the output vector represents the probability of the corresponding class. In this case, the Softmax
function is represented as follows [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]:
        </p>
        <p>o(k,o) = Softmax(df(k,o)) = e^(df(k,o)) / Σ_{l=1}^{N_k} e^(df(l,o)), (10)
where o(k,o) is the prediction of the k-th neuron of the output layer.</p>
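        <p>A minimal sketch of the Softmax function of eq. (10); subtracting the maximum before exponentiation is a standard numerical-stability trick not discussed in the text.</p>
        <preformat>
```python
import numpy as np

def softmax(df):
    # Turns the output-layer discriminants into positive numbers summing to 1
    e = np.exp(df - np.max(df))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
```
        </preformat>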
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Backpropagation algorithm</title>
        <p>
          The neural network training is based on the backpropagation algorithm. During training, the
neural network first makes estimates from the input data and then compares those estimates with
the correct answers to derive the error [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          This comparison yields the error, which is subsequently propagated back through
the network to adjust the weights so as to minimize it. The error function can be
expressed as follows [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]:
        </p>
        <p>E(w) = 0.5 Σ_{k=1}^{N_k} (μ(k,o) − ν(k,o))², (11)
where μ(k,o), ν(k,o) – the desired and predicted states of the k-th neuron of the output layer,
Nk – the number of neurons in the output layer.</p>
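        <p>Eq. (11) in code form; a minimal sketch with illustrative desired and predicted output vectors.</p>
        <preformat>
```python
import numpy as np

def loss(mu, nu):
    # Half the sum of squared differences between desired (mu)
    # and predicted (nu) outputs, eq. (11)
    return 0.5 * np.sum((mu - nu) ** 2)

E = loss(np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.1, 0.1]))
```
        </preformat>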
        <p>Many methods exist to minimize the error function, but four are used most frequently: the
gradient descent method, the Adagrad method, the RMSProp method, and the Adam method; they
stand out for their rate of development and are the most common in the literature.</p>
        <p>1. Gradient descent method.</p>
        <p>The concept of gradient descent is to repeatedly modify the network weights in the direction
opposite to the gradient of the loss [11]. The gradient indicates the direction of steepest increase of
the function, so moving in the opposite direction reduces the function. The weights are updated once
per iteration of gradient descent:
w_i(t+1) = w_i(t) + Δw_i, (12)
Δw_i = −λ·∂E(w)/∂w_i, (13)
where w_i(t) and w_i(t+1) are the previous and updated values of the weight coefficient of the i-th
neuron, and λ is the learning rate [12].</p>
        <p>One big drawback of this method is that the loss function may contain local minima and saddle
points. If it has many of them, gradient descent might get stuck at one of these places and never
reach the global minimum.</p>
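        <p>The update of eqs. (12)–(13) can be sketched on a toy one-dimensional loss E(w) = (w − 3)², whose gradient is 2(w − 3); the loss and learning rate here are illustrative, not from the paper.</p>
        <preformat>
```python
lam = 0.1   # learning rate (lambda)
w = 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)   # dE/dw for E(w) = (w - 3)^2
    w = w - lam * grad       # eqs. (12)-(13): step against the gradient
```
        </preformat>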
        <p>2. Adagrad method.</p>
        <p>Adagrad adjusts the global learning rate for each parameter based on the gradient history of
that particular parameter. It reduces the learning rate for parameters that receive frequent updates
and raises it for those that receive infrequent ones. This contrasts with the gradient method in that
the Adagrad update formula includes the sum of squared gradients G_t in the denominator. If a
parameter is associated with a chain of frequently active neurons, it is updated often, so the
accumulated sum grows quickly. The update formula can then be expressed as:
w_i(t+1) = w_i(t) − λ/√(G_t + ε) · ∂E(w)/∂w_i, (14)
where ε is a small value that prevents division by zero (typically around 10⁻⁸).</p>
        <p>Because of its adaptive nature, Adagrad is less prone to manual tuning errors than standard
gradient descent. However, the accumulation of squared gradients in G_t is not bounded, so the
denominator in the update formula can grow very large and make the updates very small. This can
cause the algorithm to stop too early, leading to a poor fit. Moreover, Adagrad does not have an
explicit notion of momentum, as optimization methods such as SGD with momentum or Adam do
[13]. This can lead to early stoppage of the algorithm before it is properly trained.</p>
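        <p>The Adagrad update of eq. (14) on the same illustrative quadratic loss; note how the accumulated sum G keeps growing and shrinks the effective step.</p>
        <preformat>
```python
import math

lam, eps = 0.5, 1e-8
w, G = 0.0, 0.0
for _ in range(2000):
    grad = 2.0 * (w - 3.0)
    G += grad ** 2                         # unbounded accumulation of g^2
    w -= lam / math.sqrt(G + eps) * grad   # eq. (14): adaptive step
```
        </preformat>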
        <p>3. RMSProp method.</p>
        <p>This is a heuristic optimization method proposed by Geoffrey Hinton. It was conceived as an
improvement on Adagrad to solve its gradient-accumulation problem. Instead of storing all past
squared gradients, as Adagrad does, RMSProp uses a running average of past squared gradients
E[g²]_{t−1} that decays exponentially. This makes it robust to the learning-rate decay problem. The
exponentially decaying running average at time t is defined as [14]:
E[g²]_t = γ·E[g²]_{t−1} + (1 − γ)·(∂E(w)/∂w_i)², (15)
where γ is the conservation coefficient in the range from zero to one. The closer γ is to one, the
larger the accumulation window and the stronger the smoothing of the squared gradient, i.e., a
moving average is determined that decreases exponentially. In practice, the value γ = 0.9 is
usually used.</p>
        <p>Then the update formula for RMSProp looks like this:
w_i(t+1) = w_i(t) − λ/√(E[g²]_t + ε) · ∂E(w)/∂w_i. (16)</p>
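        <p>Eqs. (15)–(16) as a sketch on the same illustrative quadratic loss, with γ = 0.9 as suggested in the text.</p>
        <preformat>
```python
import math

lam, gamma, eps = 0.01, 0.9, 1e-8
w, Eg2 = 0.0, 0.0
for _ in range(5000):
    grad = 2.0 * (w - 3.0)
    Eg2 = gamma * Eg2 + (1 - gamma) * grad ** 2   # eq. (15): decaying average
    w -= lam / math.sqrt(Eg2 + eps) * grad        # eq. (16)
```
        </preformat>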
        <p>RMSProp fixes the problem of Adagrad's shrinking learning rate by taking a moving average of
the squared gradient, which decays exponentially. Like Adagrad, however, RMSProp does not have
a formal notion of momentum, which can hurt its performance when collinear trends are present
[15].
4. Adam method.</p>
        <p>Currently, the most effective approach to optimizing the neural network training process is the
Adam method, proposed in the mid-2010s. It combines the concepts of RMSProp and SGD with
momentum. The Adam method differs from other methods in that instead of storing the Δw_i value,
it keeps two accumulators: one for the moving average of the gradient values (as in SGD with
momentum) and one for the moving average of the squared gradient values (as in RMSProp). The
working principles of these two accumulators are described by the following formulas [16]:
m_t = β1·m_{t−1} + (1 − β1)·∂E(w)/∂w_i, (17)
v_t = β2·v_{t−1} + (1 − β2)·(∂E(w)/∂w_i)², (18)
where β1 and β2 are the parameters for the exponentially weighted averages, with weights of 0.9
and 0.999, respectively. One important detail is the initial values of m_t and v_t: if they are initially
set to zero, they take a long time to accumulate. To overcome this problem, special bias corrections
are used, defined as follows [17]:
m̂_t = m_t/(1 − β1^t), v̂_t = v_t/(1 − β2^t). (19)
Therefore, the update formula is:
w_i(t+1) = w_i(t) − λ·m̂_t/(√v̂_t + ε). (20)</p>
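        <p>The full Adam update of eqs. (17)–(20) on the same illustrative quadratic loss, with β1 = 0.9 and β2 = 0.999 as in the text; the learning rate is an arbitrary choice for the sketch.</p>
        <preformat>
```python
import math

lam, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2.0 * (w - 3.0)
    m = b1 * m + (1 - b1) * grad        # eq. (17): gradient average
    v = b2 * v + (1 - b2) * grad ** 2   # eq. (18): squared-gradient average
    m_hat = m / (1 - b1 ** t)           # eq. (19): bias corrections
    v_hat = v / (1 - b2 ** t)
    w -= lam * m_hat / (math.sqrt(v_hat) + eps)   # eq. (20)
```
        </preformat>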
        <p>Figure 4 shows the results of the neural network simulation with the four error-minimization
methods listed above; this figure allows us to estimate the learning rate associated with each
method. It demonstrates that the Adam method achieves high accuracy in recognizing various types
of digital modulation (with a limited number of training cycles) and greater stability during network
training. As a result, the Adam method was used to minimize the error rate as much as possible.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Information features for determining modulation</title>
        <p>Cumulants of two-dimensional random processes, and the mixed moments that correspond to
them, were chosen as the defining features for training the neural model to distinguish
modulations.</p>
        <p>Cumulants are insensitive to the addition of Gaussian white noise. In practice, the carrier
frequency at reception may not be accurately determined because of, among other things, natural
instabilities in the transmitter and receiver oscillator frequencies, Doppler shift, and timing error.
Being statistical measures of the signal, high-order cumulants are less susceptible
to these forms of distortion.</p>
        <p>Cumulants (semi-invariants) are special combinations of moments that have useful properties
such as additivity for independent random variables; a moment is usually defined as the
mathematical expectation of the product of the process values at different moments in time for a
stationary stochastic process. Thus, the moments of a distribution, or moments of a random
variable φ, are defined as the integrals [18–21]:
m_n = ∫_{−∞}^{+∞} x^n·p_φ(x) dx, (21)
where p_φ(x) is the probability distribution density and n is the moment order.</p>
        <p>The cumulant of a random process is the expansion coefficient of the logarithm of the
characteristic function of the random process into a Taylor power series:
ln Θ_φ(u) = Σ_{n=1}^{∞} Q_n·(iu)^n/n!, (22)
where n is the cumulant order.</p>
        <p>
          There is a relationship between the cumulant Q_n and the moment m_n of a random variable,
but the generating functions of the cumulants are difficult to calculate if the exact distribution of
the signal is not known. However, the formulas relating them under the same distribution are
known [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], so we can use expressions for the cumulant features of two-dimensional
random processes C_{n,m} through their mixed moments E_{n,m} up to the 9th order, from C_{2,0}:
        </p>
        <p>C_{2,0} = E_{2,0}, (23)
to C_{5,4}:</p>
        <p>C_{5,4} = E_{5,4} − 10·E_{4,4}·E_{2,0} − 20·E_{4,3}·E_{1,1} − 6·E_{5,2}·E_{1,2} − …, (24)
where the remaining terms of the expansion through the mixed moments E_{n,m} follow from the
relations given in [1].</p>
        <p>Cumulants are complex numbers, and the main differences in their values for different types of
digital modulation can be seen in the real components of these numbers. For brevity, we will use the
term "cumulants" in what follows, referring only to their real parts.</p>
        <p>The initial data for calculating the moments and cumulants are the complex signal
s_k(t) = x_k(t) + i·y_k(t) and its complex conjugate s_k*(t) = x_k(t) − i·y_k(t), where x_k(t) is the
in-phase component and y_k(t) is the quadrature component.</p>
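        <p>For illustration, low-order mixed moments and cumulants can be estimated from samples of the complex baseband signal. The definition E_{p,q} = E[s^(p−q)·(s*)^q] and the relation C_{4,0} = E_{4,0} − 3·E_{2,0}² used below are the standard ones; the paper's full feature set extends such relations up to the ninth order.</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_moment(s, p, q):
    # Sample estimate of E_{p,q}: mean of s^(p-q) times conj(s)^q
    return np.mean(s ** (p - q) * np.conj(s) ** q)

# Unit-energy QPSK symbols plus a little complex Gaussian noise
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2),
                     size=20000)
s = symbols + 0.05 * (rng.normal(size=20000) + 1j * rng.normal(size=20000))

E20 = mixed_moment(s, 2, 0)                  # C_{2,0} = E_{2,0}
C40 = mixed_moment(s, 4, 0) - 3 * E20 ** 2   # standard 4th-order cumulant
```
        </preformat>
        <p>For this QPSK constellation s⁴ = −1 for every noiseless symbol, so the real part of C_{4,0} comes out close to −1 while E_{2,0} is close to zero, which is exactly the kind of class-dependent signature the classifier exploits.</p>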
        <p>Tables 1 and 2 show the values of the real parts of the cumulants, selected as identification
features, for various types of modulation at a signal-to-noise ratio of 5 dB, up to the ninth order.</p>
        <p>3. Process and result of neural model training</p>
        <p>In a multilayer perceptron, hidden layers, placed between the input and output layers, are of
great significance for both the functioning and the training of the network. The neurons in these
layers receive data from previous layers, process those data, and send them on to the following
layers. Hidden layers gather features from the input data, which are further processed in the output
layer for classification or prediction. This section determines how the number of layers affects the
accuracy of recognizing various types of digital modulation.</p>
        <p>The multilayer neural networks were implemented in Python in a Colab notebook, following the
algorithm shown in Figure 5.</p>
        <p>A dataset was prepared for the classification of digital modulations comprising 10,000 different
signals (1,000 signals for each type of modulation), of which 7,200 are used for training, 1,800 for
validation, and 1,000 for testing. The results of the simulations are presented in Figure 7, while
Figure 6 shows how the accuracy of identifying the types of digital modulation depends on the
number of layers.</p>
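        <p>The dataset split described above can be sketched as follows; the feature vectors here are random placeholders standing in for the cumulant features, and the class count matches the ten modulation types.</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(42)
n_classes, per_class, n_features = 10, 1000, 9   # 10,000 signals in total

X = rng.normal(size=(n_classes * per_class, n_features))
y = np.repeat(np.arange(n_classes), per_class)

# Shuffle once, then cut into 7,200 / 1,800 / 1,000 examples
idx = rng.permutation(len(X))
train, val, test = np.split(idx, [7200, 9000])
```
        </preformat>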
        <p>Figure 7 shows the identification of digital modulations at a signal-to-noise ratio of 5 dB for
different ANN structures. The numbers are presented in the form of a table whose rows and columns
correspond to the signal modulation type and the algorithm's decision; the cells contain the
classification results. For example, for the single-layer model: when identifying QAM-8 signals, all
100 signals involved in the computer experiment were correctly identified, while for QAM-16
signals, 99 were correctly identified and 1 signal was misidentified as QAM-64.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusions</title>
      <p>This article presents a study of digital modulation type recognition under ideal receiver
synchronization, i.e., with a known frequency and initial phase of the received signal. It is shown
why cumulants and moments are excellent information features for determining modulation types.
A multilayer perceptron structure is created that uses the Adam method to minimize the error. The
influence of the number of hidden layers on the accuracy of digital modulation type recognition is
investigated, and it is shown that a perceptron with three hidden layers recognizes digital signal
modulation types with excellent accuracy: with an SNR of 5 dB, the modulation detection accuracy
was approximately 0.99.</p>
      <p>In future work, it is possible to train this model to estimate the SNR as well and to further
improve its efficiency by choosing the specific cumulants that best determine the modulation type
and SNR. It is also planned to improve the technique for determining the modulation type when the
values of the input parameters are not known exactly.</p>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>Data science and security, volume 290 of Lecture Notes in Networks and Systems, Springer,
Singapore, 2021, pp. 168–175. doi: 10.1007/978-981-16-4486-3_18.
[11] J. S. Al-Azzeh, M. Al Hadidi, R. S. Odarchenko, S. Gnatyuk, Z. Shevchuk, Z. Hu, Analysis of
selfsimilar traffic models in computer networks, International Review on Modelling and
Simulations 10(5) (2017) 328–336. doi: 10.15866/iremos.v10i5.12009.
[12] O. Tachinina, O. Lysenko, I. Romanchenko, V. Novikov, I. Sushyn, Using Krotov’s functions for
the prompt synthesis trajectory of intelligent info-communication robot, in: M. Nechyporuk, V.
Pavlikov, D. Krytskyi, (Eds.), Information Technologies in the Design of Aerospace Engineering.
Studies in Systems, Decision and Control, volume 507, Springer, Cham, 2024. doi:
10.1007/978-3-031-43579-9_6.
[13] V. Tkachuk, Y. Yechkalo, S. Semerikov, M. Kislova, Y. Hladyr, Using mobile ICT for online
learning during COVID-19 lockdown, Communications in Computer and Information Science,
1308 (2021) 46–67. doi: 10.1007/978-3-030-77592-6_3.
[14] O. Solomentsev, M. Zaliskyi, T. Herasymenko, O. Kozhokhina, Y. Petrova, Efficiency of
operational data processing for radio electronic equipment, Aviation 23 (3) (2020) 71-77. doi:
10.3846/aviation.2019.11849.
[15] O. Lavrynenko, R. Odarchenko, G. Konakhovych, A. Taranenko, D. Bakhtiiarov, T. Dyka,
Method of Semantic Coding of Speech Signals based on Empirical Wavelet Transform, in:
Proceedings of 2021 IEEE 4th International Conference on Advanced Information and
Communication Technologies (AICT), IEEE, Lviv, Ukraine, 2021, pp. 18–22. doi:
10.1109/AICT52120.2021.9628985.
[16] O. M. Tachinina, O. I. Lysenko, I. V. Alekseeva, Algorithm for operational optimization of
twostage hypersonic unmanned aerial vehicle branching path, in: Proceedings of 2018 IEEE 5th
International Conference on Methods and Systems of Navigation and Motion Control
(MSNMC), IEEE, Kiev, Ukraine, 2018, pp. 11–15. doi: 10.1109/MSNMC.2018.8576319.
[17] M. Zaliskyi, S. Migel, A. Osipchuk, D. Bakhtiiarov, Correlation Method of Dangerous Objects
Detection for Aviation Security Systems, CEUR Workshop Proceedings 3421 (2023) 1–11. URL:
https://ceur-ws.org/Vol-3421/paper1.pdf.
[18] O. Lavrynenko, D. Bakhtiiarov, V. Kurushkin, S. Zavhorodnii, V. Antonov, P. Stanko, A method
for extracting the semantic features of speech signal recognition based on empirical wavelet
transform, Radioelectronic and Computer Systems 3 (2023) 101–124. doi:
10.32620/reks.2023.3.09.
[19] N. S. Kuzmenko, I. V. Ostroumov, K. Marais, An accuracy and availability estimation of aircraft
positioning by navigational aids, in: Proceedings of 2018 IEEE 5th International Conference on
Methods and Systems of Navigation and Motion Control (MSNMC), IEEE, Kiev, Ukraine, 2018,
pp. 36-40. doi: 10.1109/MSNMC.2018.8576276.
[20] O. Solomentsev, M. Zaliskyi, O. Kozhokhina, T. Herasymenko, Efficiency of data processing for
UAV operation system, in: Proceedings of 4th International Conference Actual Problems of
Unmanned Aerial Vehicles Developments (APUAVD), IEEE, Kiev, Ukraine, 2017, pp. 27–31. doi:
10.1109/APUAVD.2017.8308769.
[21] O. Sushchenko, et al., Airborne sensor for measuring components of terrestrial magnetic field,
in: Proceedings of IEEE 41st International Conference on Electronics and Nanotechnology
(ELNANO), IEEE, Kyiv, Ukraine, 2022, pp. 687–691. doi: 10.1109/ELNANO54667.2022.9926760.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Holubnychyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lavrynenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bakhtiiarov</surname>
          </string-name>
          ,
          <article-title>Well-Adapted to Bounded Norms Predictive Model for Aviation Sensor Systems</article-title>
          , in: I. Ostroumov, M. Zaliskyi, (Eds.),
          <source>Proceedings of the International Workshop on Advances in Civil Aviation Systems Development. ACASD</source>
          <year>2023</year>
          , volume
          <volume>736</volume>
          <source>of Lecture Notes in Networks and Systems</source>
          , Springer, Cham,
          <year>2023</year>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>193</lpage>
          . doi: 10.1007/978-3-031-38082-2_14.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Jdid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. H.</given-names>
            <surname>Lim</surname>
          </string-name>
          , I. Dayoub,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. R. B. Mohamed Juhari</surname>
          </string-name>
          ,
          <article-title>Robust Automatic Modulation Recognition Through Joint Contribution of Hand-Crafted and Contextual Features</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>104530</fpage>
          -
          <lpage>104546</lpage>
          . doi: 10.1109/ACCESS.2021.3099222.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bakhtiiarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lavrynenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lishchynovska</surname>
          </string-name>
          , I. Basiuk, T. Prykhodko,
          <article-title>Methods For Assessment And Forecasting Of Electromagnetic Radiation Levels In Urban Environments</article-title>
          ,
          <source>Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska</source>
          <volume>11</volume>
          (
          <issue>1</issue>
          ) (
          <year>2021</year>
          )
          <fpage>24</fpage>
          -
          <lpage>27</lpage>
          . doi: 10.35784/iapgos.2430.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Odarchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Gnatyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. O.</given-names>
            <surname>Zhmurko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. P.</given-names>
            <surname>Tkalich</surname>
          </string-name>
          ,
          <article-title>Improved method of routing in UAV network</article-title>
          ,
          <source>in: Proceedings of International Conference Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD)</source>
          , IEEE, Kyiv, Ukraine,
          <year>2015</year>
          , pp.
          <fpage>294</fpage>
          -
          <lpage>297</lpage>
          . doi: 10.1109/APUAVD.2015.7346624.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Odarchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gnatyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Petrova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chaplits</surname>
          </string-name>
          ,
          <article-title>Method of traffic monitoring for DDoS attacks detection in e-health systems and networks</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2255</volume>
          (
          <year>2018</year>
          )
          <fpage>193</fpage>
          -
          <lpage>204</lpage>
          . URL: https://ceur-ws.org/Vol-2255/paper18.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kharchenko</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Chyrka</surname>
          </string-name>
          ,
          <article-title>Detection of airplanes on the ground using YOLO neural network</article-title>
          ,
          <source>in: Proceedings of 17th International Conference on Mathematical Methods in Electromagnetic Theory (MMET)</source>
          , IEEE, Kyiv, Ukraine,
          <year>2018</year>
          , pp.
          <fpage>294</fpage>
          -
          <lpage>297</lpage>
          . doi: 10.1109/MMET.2018.8460392.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          , et al.,
          <article-title>UAS cyber security hazards analysis and approach to qualitative assessment</article-title>
          , in: S. Shukla,
          <string-name>
            <given-names>A.</given-names>
            <surname>Unal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Varghese</given-names>
            <surname>Kureethara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.K.</given-names>
            <surname>Mishra</surname>
          </string-name>
          , D.S. Han (Eds.),
          <source>Data science and security</source>
          , volume
          <volume>290</volume>
          of Lecture Notes in Networks and Systems, Springer, Singapore,
          <year>2021</year>
          , pp.
          <fpage>258</fpage>
          -
          <lpage>265</lpage>
          . doi: 10.1007/978-981-16-4486-3_28.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>O.</given-names>
            <surname>Tachinina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lysenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Alekseeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Novikov</surname>
          </string-name>
          ,
          <article-title>Mathematical modeling of motion of iron bird target node of security data management system sensors</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2711</volume>
          (
          <year>2020</year>
          )
          <fpage>482</fpage>
          -
          <lpage>491</lpage>
          . URL: https://ceur-ws.org/Vol-2711/paper37.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O. I.</given-names>
            <surname>Lysenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Valuiskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Tachinina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Danylyuk</surname>
          </string-name>
          ,
          <article-title>A method of control by telecommunication airsystems for wireless AD HOC networks optimization</article-title>
          ,
          <source>in: Proceedings of 2015 IEEE International Conference Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD)</source>
          , IEEE, Kyiv, Ukraine,
          <year>2015</year>
          , pp.
          <fpage>182</fpage>
          -
          <lpage>185</lpage>
          . doi: 10.1109/APUAVD.2015.7346594.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          , et al.,
          <article-title>Heteroskedasticity analysis during operational data processing of radio electronic systems</article-title>
          , in: S. Shukla,
          <string-name>
            <given-names>A.</given-names>
            <surname>Unal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Varghese</given-names>
            <surname>Kureethara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.K.</given-names>
            <surname>Mishra</surname>
          </string-name>
          , D.S. Han (Eds.),
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>