<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Smart Crossover Mechanism for Parallel Neuroevolution Method of Medical Diagnostic Models Synthesis</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Computer Systems and networks, National University "Zaporizhzhia Polytechnic"</institution>
          ,
          <addr-line>69063 Zaporizhzhia</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>Information technologies significantly expand the capabilities of modern medicine. Using of artificial neural networks is becoming particularly promising. They are actively used for diagnostics based on patient data. However, the problem of network synthesis with a satisfactory topology and accurate diagnosis remains important. Neuroevolution methods allow to solve this problem without much involvement of an expert. Moreover, these methods make it possible to effectively use the parallel power of modern computer systems. On the other hand, parallelization raises a number of new problems. The paper suggests mechanisms for improving the crossover operator. The proposed solution allows you to reduce resource consumption and improve the synthesis process.</p>
      </abstract>
      <kwd-group>
        <kwd>Medical Diagnosis</kwd>
        <kwd>Forecasting</kwd>
        <kwd>Neuroevolution</kwd>
        <kwd>Synthesis</kwd>
        <kwd>Adaptive Mechanism</kwd>
        <kwd>Genetic Algorithm</kwd>
        <kwd>Parallel Genetic Algorithm</kwd>
        <kwd>Crossover</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The human factor is a frequent source of problems, and medicine is no
exception. A doctor's mistake can cost a patient their health or even their
life, and such mistakes are not rare. Even the most qualified professional can
err: a specialist may be tired, irritated, or concentrating on the problem
worse than usual [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ].
      </p>
      <p>
        In this case, information technologies can come to the rescue. For example, the
IBM Watson cognitive system [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4-7</xref>
        ] copes with work in the medical field at a fairly
high level (oncology, reading x-rays, etc.) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. But there are other solutions
proposed by independent researchers. A number of scientists regularly present successful
results of using artificial intelligence systems in medical diagnostics [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8-11</xref>
        ].
      </p>
      <p>
        Artificial neural networks (ANNs) have come into practice wherever
problems of forecasting, classification, or control must be solved. Problems of
medical diagnostics are a subclass of problems of classification and
forecasting of the object's condition. The following reasons determine the
impressive success of ANNs [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">10-13</xref>
        ]:
─ a huge number of capabilities [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. ANNs are a powerful modeling method that can reproduce extremely
complex dependencies. In particular, neural networks are naturally nonlinear.
In problems where a linear approximation is unsatisfactory (most medical
diagnostics problems), linear models do not work well. In addition, neural
networks cope with modeling dependencies even in the case of a large number of
variables;
─ learning on examples (data about the object) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Neural networks are trained on examples: first, representative data
is selected, and then a training algorithm is run that automatically learns
the structure of the data.
      </p>
      <p>
        On the other hand, a user who trains a neural network needs a certain set of
knowledge about how to select and prepare data, select the appropriate network architecture,
and interpret the results [
        <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17 ref18 ref19 ref20">14-20</xref>
        ]. Therefore, it can be assumed that the probability of error shifts
from the expert in the problem domain to the expert in the design of the
ANN [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        Neuroevolution methods of ANN synthesis are based on an evolutionary approach
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. These methods can simultaneously configure the network structure and
weights. This makes it possible to obtain ready-made neural network solutions
using only data from the training sample and, in most cases, does not require
the user to have deep knowledge of ANN theory [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        Advantages of using neuroevolution methods [
        <xref ref-type="bibr" rid="ref24 ref25 ref26 ref27">24-27</xref>
        ]:
─ a wide variety of resulting topologies, including solutions with
"non-standard" ANN structures;
─ adaptivity;
─ universality;
─ use does not require deep knowledge of ANNs;
─ the possibility of using a parallel approach.
      </p>
      <p>The most well-established neuroevolution method is the genetic algorithm (GA).</p>
      <p>
        Unlike other optimization technologies, GA contain a population of trial solutions
that are competitively managed using defined operators [
        <xref ref-type="bibr" rid="ref28 ref29 ref30">28-30</xref>
        ]. GAs inherently perform iterative training of a population of
individuals. The power of a GA increases with the use of distributed
computing; such algorithms are called parallel genetic algorithms. They are
based on splitting a population into several separate subpopulations, each of
which is processed by a GA independently of the others.
      </p>
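<p>The island scheme described above can be sketched in a few lines. This is a hedged, minimal illustration (not the paper's implementation): the population is split into subpopulations, each evolved independently by a simple GA loop; the names fitness, evolve_island, and all numeric settings are illustrative assumptions, and a toy real-valued genome stands in for a neural network.</p>

```python
import random

def fitness(ind):
    # Toy fitness for illustration: maximize the sum of genes.
    return sum(ind)

def evolve_island(subpop, generations=10, mutation_rate=0.1):
    # One island's GA loop: sort, keep the best half, refill with offspring.
    for _ in range(generations):
        subpop.sort(key=fitness, reverse=True)
        parents = subpop[: len(subpop) // 2]
        children = []
        while len(parents) + len(children) != len(subpop):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(p1))        # one-point recombination
            child = p1[:cut] + p2[cut:]
            # Gaussian mutation applied to each gene with probability mutation_rate
            child = [g + random.gauss(0, 0.1) if random.random() > 1 - mutation_rate else g
                     for g in child]
            children.append(child)
        subpop = parents + children
    return max(subpop, key=fitness)

random.seed(1)
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(40)]
islands = [population[i::4] for i in range(4)]        # 4 independent subpopulations
best_per_island = [evolve_island(isl) for isl in islands]
best = max(best_per_island, key=fitness)
```

<p>In a real deployment each call to evolve_island would run on its own core (e.g. via a thread or process pool), which is precisely the parallelization opportunity the text describes.</p>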
      <p>
        However, when it comes to parallelization, it should be taken into
account that the main disadvantage of GA is its constant drift toward a
population concentrated around one local optimum. This steadily reduces the
genetic diversity of the population, which in turn reduces the ability of the
GA to find the global optimum and/or to adapt to changing parameters of the
target functions [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. Therefore, the task of improving evolutionary optimization methods
by increasing their adaptive characteristics is urgent.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Literature review</title>
      <p>As noted above, GA is one of the most suitable ways to synthesize
ANNs. This is because at the initial stage there is no information at all
about the direction of search in terms of setting the weight matrices. Under
such conditions of uncertainty, evolutionary methods, including GA, have the
highest chances of achieving the necessary results.</p>
      <p>
        At the same time, genetic algorithms have a significant disadvantage:
the multi-iterative nature of the methods and the serious time costs they
entail. To mitigate this, parallelization can be used to distribute the
calculations between the cores of modern computer systems. However, when
designing a parallel approach, the most common problems should be considered:
─ selecting or developing a strategy for interaction between the components of
the algorithm;
─ choosing the frequency of migration between populations;
─ determining the migrating individuals and their number;
─ determining the structure of the evolution of individual populations.
Consider the problems in more detail. The structure of a parallel system is an
important factor in the performance of a parallel algorithm, since it determines how quickly
(or how slowly) a good solution spreads to other populations. If the system is strongly
connected, then good solutions will quickly spread to all streams and can quickly
"saturate" the population [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. On the other hand, if the network is loosely connected,
solutions will spread more slowly and threads will be more isolated from each other.
Further parallel development and recombination of different solutions can occur to
obtain potentially better solutions.
      </p>
      <p>
        A common trend in promising parallel genetic algorithms (PGA) [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ], [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] is the use of static system structures that are defined before the
algorithm is run and remain unchanged.
      </p>
      <p>
        Another method of constructing a structure is to create a dynamic system [
        <xref ref-type="bibr" rid="ref33 ref34 ref35">33-35</xref>
        ].
In this case, a thread is not limited to links with a fixed set of other
threads; instead, migrants are directed to threads that meet a certain
criterion. Such a criterion may be the degree of population diversity or a
measure of genotypic distance between two populations (or the distance from a
characteristic individual of the population, for example, its favorite). This
structure requires mechanisms for tracking events in neighboring populations:
if an event occurred in one neighboring population, an event should be
expected in the other as well.
      </p>
      <p>
        The frequency of migrations also has a large impact on the final
solution [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
As is known, too frequent migrations lead to degeneration of populations,
while rare ones, on the contrary, slow convergence. Various methods are used
to regulate the migration frequency, and they can be divided into two types:
adaptive and event-based. In the first case, adaptation methods adjust the
migration frequency while the algorithm runs. In the second case, migration is
performed only when a triggering event occurs.
      </p>
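<p>An event-based trigger of the kind described above can be sketched as follows. This is an illustrative assumption, not the paper's formulation: the helper names (genotypic_distance, should_migrate) and the threshold value are hypothetical, and the "favorite" of each island is taken to be its best individual represented as a real-valued genome.</p>

```python
import math

def genotypic_distance(ind_a, ind_b):
    # Euclidean distance between two real-valued genomes.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ind_a, ind_b)))

def should_migrate(favorite_a, favorite_b, threshold=2.0):
    # Migrate only when the two populations have drifted far enough apart
    # that exchanging individuals can reintroduce useful diversity.
    return genotypic_distance(favorite_a, favorite_b) > threshold
```

<p>Each island would evaluate this predicate against its neighbors' favorites at the end of a generation, performing a migration only when it fires, instead of on a fixed schedule.</p>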
      <p>
        Selection mechanisms are used to select individuals for migration [
        <xref ref-type="bibr" rid="ref36 ref37 ref38 ref39 ref40">36-40</xref>
        ]. It is known that individual chromosomes may contain "good" fragments
of the genetic code even when the chromosomes as a whole are poorly adapted.
At the same time, excluding such individuals can lead to premature convergence
or to skipping the global optimum.
      </p>
      <p>
        Using different strategies in different populations imposes one main
restriction: the chromosome structure must be of the same type everywhere. But
the effect achievable with a successful combination can be much greater than
when using a single GA configuration in all populations [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ].
      </p>
      <p>
        It is also worth noting that a large number of migrations, and even the
dynamic exchange of intermediate information between threads, requires
additional overhead, which sometimes largely negates the gains of parallel
execution of calculations [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ], [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ].
      </p>
      <p>
        Moreover, we should not forget that even the sequential version of GA imposes
significant resource requirements (especially RAM), compared to gradient methods
[
        <xref ref-type="bibr" rid="ref39">39</xref>
        ], [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ]. This is because in most cases a population of neural networks is used.
Hence, when parallelizing, keep in mind that the requirements will increase according
to the number of threads involved.
      </p>
      <p>Therefore, the use of additional mechanisms and the introduction of
hybrid methods at different stages of operation will improve the performance
of the PGA.</p>
    </sec>
    <sec id="sec-3">
      <title>Parallel genetic algorithm with smart crossover</title>
      <p>The paper proposes a parallel genetic method. Parallelization will occur in the
following way. Initially, we will generate a population of individuals on the main core of the
system, where each individual is a separate neural network:</p>
      <p>P = {NN1, NN2, ..., NNn},
(1)
where P is the population,
NNn is a neural network,
n is the size of the population.</p>
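<p>Step (1) can be sketched as follows. This is a hedged illustration only: each "network" is reduced to a flat weight vector, and the names make_network and init_population, as well as the vector length and weight range, are assumptions rather than the paper's settings.</p>

```python
import random

def make_network(n_weights=16):
    # Stand-in for a neural network: a flat vector of random weights.
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

def init_population(n):
    # P = {NN1, NN2, ..., NNn}
    return [make_network() for _ in range(n)]

P = init_population(20)
```
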
      <p>Also, at the initial stage, the main free parameters of the method will be set: the
stop criteria, population size, and so on.</p>
      <p>The next step is to divide the population into subpopulations and distribute
subpopulations between the cores of the multi-core system.</p>
      <p>The cores perform the same sequence of actions on their
subpopulations: evaluating the genetic information of individuals, sorting and
selecting the best individuals, performing crossover, and selecting the new
best individuals.</p>
      <p>Then the selected best individuals are sent to the main core, where
they are re-sorted and the best representatives are selected. The new best
individual is evaluated against the stop criterion. If it satisfies the
criterion, it becomes the solution and the method ends. If not, the best
individuals from the subpopulations are crossed to obtain a new population,
which is re-distributed between the system cores.</p>
      <p>Diversity is maintained in the process of evolution by the fact that each species
(subpopulation) develops without exchanging genetic material with other species.
This is an important aspect of this model. The exchange of genetic material between
two different species usually produces non-viable offspring. In addition, mixing of
genetic material can reduce the diversity of populations. This approach will also
significantly reduce the share of overhead associated with the transfer of information
between the system cores.</p>
      <p>Now let's look at the mechanisms that allow optimizing the use of RAM, while
maintaining the adaptive characteristics of the method. In the proposed method, it is
recommended to use a smart crossover, which is based on uniform crossing and rank
selection enhanced by criteria conditions.</p>
      <p>
        Uniform crossover is one of the most effective recombination operators in standard
GA [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ], [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ].
      </p>
      <p>
        Uniform crossover is performed according to a randomly selected standard that
specifies which genes should be inherited from the first parent (other genes are taken
from the second parent) [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ]. In other words, the general rule of uniform crossover can be
represented as follows:
      </p>
      <p>
        Ind3 = Crossover(Ind1, Ind2, DataofCros);
gInd3 = {g1 = Rand(g1Ind1, g1Ind2),
g2 = Rand(g2Ind1, g2Ind2), ...,
gi = Rand(giInd1, giInd2)}.
(2)
It has long been known that setting the probability of passing a parent gene to a
descendant in a uniform crossover can significantly increase its efficiency [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ], and
also allows emulating other crossover operators (single-point, two-point). It
is also known that the uniform crossover operator makes it possible to apply
so-called multi-parent recombination, in which more than two parents are used
to generate one offspring. Despite this, most studies use only two parents and
a fixed gene-transmission probability of 0.5 [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ].
      </p>
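<p>Rule (2) with an adjustable gene-inheritance probability, as discussed above, can be sketched as a short function. This is a minimal illustration, not the paper's code: each offspring gene is taken from the first parent with probability p and from the second parent otherwise; p = 0.5 corresponds to the common fixed setting.</p>

```python
import random

def uniform_crossover(parent1, parent2, p=0.5):
    # Each offspring gene comes from parent1 with probability p,
    # otherwise from parent2. random.random() yields values in [0, 1),
    # so the comparison below fires with probability p.
    return [g1 if random.random() > 1 - p else g2
            for g1, g2 in zip(parent1, parent2)]

child = uniform_crossover([1, 1, 1, 1], [0, 0, 0, 0], p=0.5)
```

<p>Setting p near 0 or 1 biases inheritance toward one parent, which is how this operator can approximate the behavior of simpler crossover schemes.</p>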
      <p>Uniform crossover gives greater flexibility when combining strings,
which is an important advantage when working with GA.</p>
      <p>However, it should be noted that uniform crossover requires additional
computing power. On the other hand, uniform crossover makes it possible to
emulate simpler types of crossover, such as two-point crossover. Therefore, a
two-point crossover will be used within the threads. This approach will make
it possible to implement the method in the future on computing systems based
on graphics processors (GPUs).</p>
      <p>It is proposed to strengthen the rank selection by introducing
additional criteria that help track the characteristics of the neural networks
more precisely, namely excessive memory usage and the approximation properties
of the network.</p>
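<p>One way such a strengthened rank selection could look is sketched below. This is an illustrative assumption, not the paper's exact formulation: individuals are ranked by approximation error, and networks exceeding a memory budget are demoted by a fixed penalty; the representation as (error, memory) tuples, the memory_limit, and the penalty value are all hypothetical.</p>

```python
def rank_select(population, k, memory_limit=1000, penalty=0.5):
    # population: list of (error, memory_bytes) tuples; lower error is better.
    def score(ind):
        error, memory = ind
        if memory > memory_limit:
            # Demote oversized networks in the ranking.
            return error + penalty
        return error
    ranked = sorted(population, key=score)
    return ranked[:k]

# The accurate-but-oversized network (error 0.05, 2000 bytes) is demoted,
# so the two compact networks survive.
survivors = rank_select([(0.10, 500), (0.05, 2000), (0.20, 300)], k=2)
```
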
      <p>The general scheme of the method is shown in Fig. 1.</p>
    </sec>
    <sec id="sec-4">
      <title>Experiment</title>
      <p>
        Description of the experiment. A sample of the Parkinson's Disease
Classification Data Set will be used for the
experiment [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ].
      </p>
      <p>The data used in this study were gathered from 188 patients with PD
(107 men and 81 women) with ages ranging from 33 to 87 (65.1±10.9) at the
Department of Neurology in Cerrahpasa Faculty of Medicine, Istanbul
University.</p>
      <p>The control group consists of 64 healthy individuals (23 men and 41
women) with ages varying between 41 and 82 (61.1±8.9). During the data
collection process, the microphone was set to 44.1 kHz and, following the
physician's examination, the sustained phonation of the vowel was collected
from each subject with three repetitions.
Table 1 shows the main characteristics of the data sample.</p>
      <p>The following hardware and software have been used for experimental verification
of the proposed parallel genetic method for ANN synthesis: the computing system of
the Department of software tools of Zaporizhzhya national technical university
(National university “Zaporizhzhia politechnic”), Zaporizhzhia: Xeon processor E5-2660
v4 (14 cores), RAM 4x16 GB DDR4, the programming model of Java threads.</p>
      <p>
        The results of the proposed method will be compared with the results of the
method considered in the previous works [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ]. The old modification will be called PGM (parallel genetic method)
and the new variant PGM SC (parallel genetic method with smart crossover).
Note that the size of the parent pool for uniform crossover is equal to the
number of system cores involved.
Fig. 2 shows a graph of the execution time (in minutes) of the proposed method
as a function of the number of cores involved.
The graphs show that the proposed method has an acceptable degree of
parallelism and performs effectively on a MIMD parallel system. In addition,
the processor in the multi-core computer supports Turbo Boost technology [
        <xref ref-type="bibr" rid="ref45 ref46 ref47">45–47</xref>
        ], making the execution time of the method on a single core much lower
than on a core that does not support this technology.
      </p>
      <p>Fig. 3 shows graphs of the changes in communication overhead. Since
the new method transfers only the best individuals, it significantly reduces
the transfer of excess information. As a result, the communication overhead of
executing the proposed method on the computer systems is relatively small, and
the number of parallel operations significantly exceeds the number of serial
operations and synchronizations. Communication overhead is understood here as
the ratio of the time the system spends on transfers and synchronization among
cores to the time of target calculations on a given number of cores.</p>
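<p>The overhead definition above reduces to a one-line ratio; the helper below is a hypothetical illustration of how it would be computed from measured timings (the function name and example values are assumptions, not measurements from the paper).</p>

```python
def communication_overhead(transfer_sync_time, computation_time):
    # Ratio of time spent on inter-core transfers and synchronization
    # to the time of target calculations on the given number of cores.
    return transfer_sync_time / computation_time

# e.g. 2 minutes of transfers/synchronization against 40 minutes of work
ratio = communication_overhead(2.0, 40.0)
```
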
      <p>Fig. 2. Dependence of the execution time of the proposed method on the
number of involved cores of the computing system
Fig. 3. Communication overhead of performing the proposed method versus the
number of involved cores of the computing system
Fig. 4 shows graphs of speedup changes. Because the communication overhead has
decreased, the speedup of execution increases significantly.</p>
      <p>The graph of the efficiency of the computer systems is presented in
Fig. 5. It shows that using even 14 cores of the computer systems for the
implementation of the proposed method keeps the efficiency at a relatively
acceptable level and indicates the potential, if necessary, to use even more
cores.
Based on the results of the experiments, it can be argued that the proposed
method can be used in the synthesis of neural network diagnostic models. The
proposed smart crossover mechanism significantly optimizes the synthesis
process by using an adjustable uniform crossover and additional criteria at
the selection stage.</p>
      <p>However, we are not talking about large-scale implementation of neural networks
in hospitals around the world. The main problem in terms of the spread of these
technologies is that neural networks are a kind of "black box". Specialists enter data and
get a certain result. But the creators of such systems may not fully understand how
this result was obtained, what algorithms and in what sequence are involved. If neural
networks could be made more transparent, and the principle of their operation could
be easily explained to medical practitioners, then the rate of spread of this technology
would be much higher.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgment</title>
      <p>The work is supported by the state budget scientific research project of National
University "Zaporizhzhia Polytechnic" “Intelligent methods and software for
diagnostics and non-destructive quality control of military and civilian applications” (state
registration number 0119U100360).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Einarson</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acs</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ludwig</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panton</surname>
          </string-name>
          , U.H.:
          <article-title>Prevalence of cardiovascular disease in type 2 diabetes: a systematic literature review of scientific evidence from across the world in 2007-2017</article-title>
          .
          <source>Cardiovasc Diabetol</source>
          <volume>17</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1186/s12933-018-0728-6
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Petrie</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guzik</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Touyz</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          : Diabetes, Hypertension, and
          <article-title>Cardiovascular Disease: Clinical Insights and Vascular Mechanisms</article-title>
          .
          <source>The Canadian journal of cardiology 34(5)</source>
          ,
          <fpage>575</fpage>
          -
          <lpage>584</lpage>
          (
          <year>2017</year>
          ). https://doi.org/10.1016/j.cjca.2017.12.005
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Jungen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scherschel</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickholt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuklik</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klatt</surname>
          </string-name>
          , N.:
          <article-title>Disruption of cardiac cholinergic neurons enhances susceptibility to ventricular arrhythmias</article-title>
          .
          <source>Nature Communications</source>
          <volume>8</volume>
          (
          <year>2017</year>
          ). https://doi.org/10.1038/ncomms14155
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bakkar</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenzini</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          et al.:
          <article-title>Artificial intelligence in neurodegenerative disease research: use of IBM Watson to identify additional RNA-binding proteins altered in amyotrophic lateral sclerosis</article-title>
          .
          <source>Acta Neuropathol</source>
          <volume>135</volume>
          ,
          <fpage>227</fpage>
          -
          <lpage>247</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1007/s00401-017-1785-8
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Gomes</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dietterich</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barrett</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          et al.:
          <article-title>Computational sustainability: computing for a better world and a sustainable future</article-title>
          .
          <source>Communications of The ACM</source>
          <volume>62</volume>
          (
          <issue>9</issue>
          ),
          <fpage>56</fpage>
          -
          <lpage>65</lpage>
          (
          <year>2019</year>
          ). https://doi.org/10.1145/3339399
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kolpakova</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lovkin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Improved method of group decision making in expert systems based on competitive agents selection</article-title>
          .
          <source>UKRCON: IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON)</source>
          ,
          <fpage>939</fpage>
          -
          <lpage>943</lpage>
          . KPI, Kyiv
          (
          <year>2017</year>
          ). https://doi.org/10.1109/UKRCON.2017.8100388
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. IBM Watson Homepage, https://www.ibm.com/watson</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Pranav</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anirudh</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anuj</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          et al.:
          <article-title>CheXpedition: Investigating Generalization Challenges for Translation of Chest X-Ray Algorithms to the Clinical Setting</article-title>
          (
          <year>2020</year>
          ), https://arxiv.org/pdf/2002.11379.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <article-title>Artificial intelligence rivals radiologists in screening X-rays for certain diseases</article-title>
          , https://med.stanford.edu/news/all-news/2018/11/ai-outperformed-radiologists-inscreening-x-rays-for-certain-diseases.html
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <article-title>Stanford researchers develop artificial intelligence tool to help detect brain aneurysms</article-title>
          , https://news.stanford.edu/2019/06/07/ai-tool-helps-radiologists-detect-brain-aneurysms/
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Using modern architectures of recurrent neural networks for technical diagnosis of complex systems</article-title>
          .
          <source>In: Proceedings of the 2018 International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&amp;T)</source>
          , pp.
          <fpage>411</fpage>
          -
          <lpage>416</lpage>
          . IEEE, Kharkov (
          <year>2018</year>
          ). https://doi.org/10.1109/infocommst.2018.8632015
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marques</surname>
            ,
            <given-names>M.R.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Botti</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          et al.:
          <article-title>Recent advances and applications of machine learning in solid-state materials science</article-title>
          .
          <source>npj Comput Mater</source>
          <volume>5</volume>
          ,
          <issue>83</issue>
          (
          <year>2019</year>
          ). https://doi.org/10.1038/s41524-019-0221-0
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Bradley</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holan</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wikle</surname>
            ,
            <given-names>C.K.</given-names>
          </string-name>
          :
          <article-title>Multivariate spatio-temporal models for high-dimensional areal data with application to Longitudinal Employer-Household Dynamics</article-title>
          .
          <source>The Annals of Applied Statistics</source>
          ,
          <volume>9</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1761</fpage>
          -
          <lpage>1791</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <article-title>A Tour of Machine Learning Algorithms</article-title>
          , https://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Sabater-Mir</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torra</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aguiló</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>González-Hidalgo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <source>Artificial Intelligence Research and Development</source>
          . IOS Press, Amsterdam (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Scher</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Messori</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Generalization properties of feed-forward neural networks trained on Lorenz systems</article-title>
          .
          <source>Nonlinear Processes in Geophysics</source>
          <volume>26</volume>
          ,
          <fpage>381</fpage>
          -
          <lpage>399</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <article-title>How to Scale Data for Long Short-Term Memory Networks in Python</article-title>
          , https://machinelearningmastery.com/how-to-scale-data-for-long-short-term-memory-networks-in-python/
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <article-title>Learning process of a neural network</article-title>
          , https://towardsdatascience.com/how-do-artificial-neural-networks-learn-773e46399fc7
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zayko</surname>
            ,
            <given-names>T.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.O.</given-names>
          </string-name>
          :
          <article-title>Synthesis of Neuro-Fuzzy Networks on the Basis of Association Rules</article-title>
          .
          <source>Cybern Syst Anal</source>
          <volume>50</volume>
          ,
          <fpage>348</fpage>
          -
          <lpage>357</lpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1007/s10559-014-9623-7
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Using parallel random search to train fuzzy neural networks</article-title>
          .
          <source>Aut. Control Comp. Sci.</source>
          <volume>48</volume>
          ,
          <fpage>313</fpage>
          -
          <lpage>323</lpage>
          (
          <year>2014</year>
          ). https://doi.org/10.3103/S0146411614060078
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Stanley</surname>
            ,
            <given-names>K.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clune</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehman</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.:
          <article-title>Designing neural networks through neuroevolution</article-title>
          .
          <source>Nature Machine Intelligence</source>
          <volume>1</volume>
          ,
          <fpage>24</fpage>
          -
          <lpage>35</lpage>
          (
          <year>2019</year>
          ). https://doi.org/10.1038/s42256-018-0006-z
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Baldominos</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saez</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Isasi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On the automated, evolutionary design of neural networks: past, present, and future</article-title>
          .
          <source>Neural Comput &amp; Applic</source>
          <volume>32</volume>
          ,
          <fpage>519</fpage>
          -
          <lpage>545</lpage>
          (
          <year>2020</year>
          ). https://doi.org/10.1007/s00521-019-04160-6
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Gaier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asteroth</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mouret</surname>
            ,
            <given-names>J.-B.</given-names>
          </string-name>
          :
          <article-title>Data-efficient Neuroevolution with Kernel-Based Surrogate Models</article-title>
          .
          <source>GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference</source>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>92</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1145/3205455.3205510
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Albrigtsen</surname>
            ,
            <given-names>S.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Imenes</surname>
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Neuroevolution of Actively Controlled Virtual Characters</article-title>
          . University of Agder, Kristiansand &amp; Grimstad
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Bohrer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grisci</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dorn</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Neuroevolution of Neural Network Architectures Using CoDeepNEAT and Keras</article-title>
          (
          <year>2020</year>
          ), https://arxiv.org/abs/2002.04634
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Phillips</surname>
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Deep Neuroevolution: Genetic Algorithms are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning - Paper Summary</article-title>
          , https://towardsdatascience.com/deep-neuroevolution-genetic-algorithms-are-a-competitive-alternative-for-training-deep-neural-822bfe3291f5
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <article-title>Gradient descent vs. neuroevolution</article-title>
          , https://towardsdatascience.com/gradient-descent-vs-neuroevolution-f907dace010f
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>García-Martínez</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodriguez</surname>
            ,
            <given-names>F.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lozano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Genetic Algorithms</article-title>
          . In:
          <string-name>
            <surname>Martí</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pardalos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Resende</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (eds)
          <source>Handbook of Heuristics</source>
          . Springer, Cham (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>W.-P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsiao</surname>
            ,
            <given-names>Y.-T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hwang</surname>
            ,
            <given-names>W.-C.</given-names>
          </string-name>
          :
          <article-title>Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment</article-title>
          .
          <source>BMC Systems Biology</source>
          <volume>8</volume>
          (
          <issue>1</issue>
          ),
          <fpage>23</fpage>
          -
          <lpage>28</lpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1186/1752-0509-8-5
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Gonçalves</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Henggeler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Optimizing residential energy resources with an improved multi-objective genetic algorithm based on greedy mutations</article-title>
          .
          <source>GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference</source>
          , pp.
          <fpage>1246</fpage>
          -
          <lpage>1253</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1145/3205455.3205616
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mei</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A two-stage genetic programming hyper-heuristic approach with feature selection for dynamic flexible job shop scheduling</article-title>
          .
          <source>GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference</source>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>355</lpage>
          (
          <year>2019</year>
          ). https://doi.org/10.1145/3321707.3321790
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Stork</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiben</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bartz-Beielstein</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A new Taxonomy of Continuous Global Optimization Algorithms</article-title>
          (
          <year>2018</year>
          ), https://arxiv.org/pdf/1808.08818.pdf
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Potuzak</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Optimization of a genetic algorithm for road traffic network division using a distributed/parallel genetic algorithm</article-title>
          .
          <source>2016 9th International Conference on Human System Interactions (HSI)</source>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>27</lpage>
          (
          <year>2016</year>
          ). https://doi.org/10.1109/HSI.2016.7529603
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Hoseini Alinodehi</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moshfe</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saber Zaeimian</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>High-Speed General Purpose Genetic Algorithm Processor</article-title>
          .
          <source>IEEE Transactions on Cybernetics</source>
          <volume>46</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1551</fpage>
          -
          <lpage>1565</lpage>
          (
          <year>2016</year>
          ). https://doi.org/10.1109/TCYB.2015.2451595
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Hou</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>A Parallel Genetic Algorithm With Dispersion Correction for HW/SW Partitioning on Multi-Core CPU and Many-Core GPU</article-title>
          .
          <source>IEEE Access</source>
          <volume>6</volume>
          ,
          <fpage>883</fpage>
          -
          <lpage>898</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1109/ACCESS.2017.2776295
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Varun Kumar</surname>
            ,
            <given-names>S.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panneerselvam</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A Study of Crossover Operators for Genetic Algorithms to Solve VRP and its Variants and New Sinusoidal Motion Crossover Operator</article-title>
          .
          <source>International Journal of Computational Intelligence Research</source>
          <volume>13</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1717</fpage>
          -
          <lpage>1733</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Hassanat</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alkafaween</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>On Enhancing Genetic Algorithms Using New Crossovers</article-title>
          , https://arxiv.org/ftp/arxiv/papers/1801/1801.02335.pdf
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Umbarkar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheth</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Crossover operators in genetic algorithms: a review</article-title>
          .
          <source>ICTACT Journal on Soft Computing</source>
          <volume>6</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1083</fpage>
          -
          <lpage>1092</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.21917/ijsc.2015.0150
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Experimental investigation with analyzing the training method complexity of neuro-fuzzy networks based on parallel random search</article-title>
          .
          <source>Aut. Control Comp. Sci.</source>
          <volume>49</volume>
          ,
          <fpage>11</fpage>
          -
          <lpage>20</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.3103/S0146411615010071
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Parallel Method of Neural Network Synthesis Based on a Modified Genetic Algorithm Application</article-title>
          .
          <source>Workshop Proceedings of the 8th International Conference on “Mathematics. Information Technologies. Education”, MoMLeT&amp;DS-2019</source>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>23</lpage>
          . Lviv Polytechnic National University, Lviv (
          <year>2019</year>
          ). https://dblp.org/rec/conf/momlet/LeoshchenkoOSSZ19
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Baeck</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fogel</surname>
            ,
            <given-names>D.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Michalewicz</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <source>Evolutionary Computation 1: Basic Algorithms and Operators</source>
          . CRC Press, Boca Raton (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pratihar</surname>
            ,
            <given-names>D.K.</given-names>
          </string-name>
          :
          <article-title>A directional crossover (DX) operator for real parameter optimization using genetic algorithm</article-title>
          .
          <source>Appl Intell</source>
          <volume>49</volume>
          ,
          <fpage>1841</fpage>
          -
          <lpage>1865</lpage>
          (
          <year>2019</year>
          ). https://doi.org/10.1007/s10489-018-1364-2
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <article-title>Parkinson's Disease Classification Data Set</article-title>
          , https://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lytvyn</surname>
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Parallel Genetic Method for the Synthesis of Recurrent Neural Networks for Using in Medicine</article-title>
          .
          <source>In: Proceedings of the Second International Workshop on Computer Modeling and Intelligent Systems (CMIS-2019)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          (
          <year>2019</year>
          ). https://dblp.org/rec/conf/cmis/LeoshchenkoOSSL19
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <source>Intel Turbo Boost Technology 2.0</source>
          , https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lovkin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Feature Selection Based on Parallel Stochastic Computing</article-title>
          .
          <source>13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</source>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>351</lpage>
          . IEEE, Lviv (
          <year>2018</year>
          ). https://doi.org/10.1109/STC-CSIT.2018.8526729
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name>
            <surname>Shkarupylo</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolpakova</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Development of stratified approach to software defined networks simulation</article-title>
          .
          <source>Eastern-European Journal of Enterprise Technologies</source>
          ,
          <volume>5</volume>
          ,
          <fpage>67</fpage>
          -
          <lpage>73</lpage>
          (
          <year>2017</year>
          ). https://doi.org/10.15587/1729-4061.2017.110142
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>