<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Implementation of Selective Pressure Mechanism to Optimize Memory Consumption in the Synthesis of Neuromodels for Medical Diagnostics</article-title>
      </title-group>
      <pub-date>
        <year>1800</year>
      </pub-date>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>The introduction of artificial neural networks in the process of medical diagnosis meets a number of difficulties. The using of general methods for synthesis and training networks is difficult because of the complexity of the simulated system, that is the human. Neuroevolution approach for the synthesis of neural network models has proven itself well, but it is also not without difficulties. The paper proposes a new mechanism for modifying the genetic algorithm in the synthesis of neuromodels that can be used in medical diagnostics. Innovations allow to reduce time of synthesis, and also to solve a number of problems at a choice of the best individuals for formation of new population.</p>
      </abstract>
      <kwd-group>
        <kwd>medical diagnosis</kwd>
        <kwd>prediction</kwd>
        <kwd>neuromodels</kwd>
        <kwd>synthesis</kwd>
        <kwd>selective pressure</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>In the arsenal of modern medicine, there are many effective means of detecting a
variety of diseases, but some of them are invasive, dangerous to the patient or difficult
to implement and high-cost. Most of these techniques can afford only
multicommercial medical centers, and therefore inaccessible to the majority of population.
Modern medicine, especially at the primary level, needs to be armed with
inexpensive, safe for the patient, effective and reliable tools for the earliest possible detection
of the most common forms of pathology. One of the ways to create such tools is the
use of artificial neural network (ANN) technologies.</p>
      <p>Neural networks are implemented according to the principles of construction and
functioning of the human brain. From it that technologies inheritance the ability to
learn and extract knowledge from statistical data, to generalize them in the form of
rules and regularities, the property of intuition. Well-designed and properly trained
ANN are able to build adequate mathematical models and use them to perform
highprecision predictions (forecasts in many areas, including medicine.</p>
      <p>ANNs are not programmed in the usual sense of the word, they are trained. The
possibility of training is one of their main advantages over traditional algorithms.
After training, ANNs become mathematical models of the subject areas under
consideration. This means that virtual experiments can be performed on them, and ANNs
will behave in exactly the same way as the subject area they are modeling.</p>
      <p>The method of mathematical modeling in its classical sense has long been fruitfully
used in many scientific fields. Today, no sufficiently complex technical object or
process is created and launched without virtual computer experiments being
performed on its mathematical model. Thanks to this, scientists and engineers know
exactly how long the object they create will live, how it will behave in complex
changing conditions, and what should be done to avoid trouble.</p>
      <p>Experts note that the method of mathematical modeling for a long time was
practically unavailable for use in the field of medical sciences due to the exceptional
complexity of the object of modeling the human. But new ANN technologies allow to
overcome this barrier and to construct mathematical models of patients and to carry
out computer experiments on them: changing a way of life virtually, trying various
courses of treatment, selecting medicines and observing on the computer screen to
what it will lead.</p>
      <p>
        Moreover, as the scientists note, cases have been repeatedly recorded when in the
process of neural network modeling new, previously unknown knowledge and
patterns were revealed and used. The results of neural network modeling-diagnoses and
forecasts, eventually found confirmation, despite the apparent paradoxical nature of
the detected patterns [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1-4</xref>
        ].
      </p>
      <p>
        The facts discovered by the method of neural network mathematical modeling are
not always consistent with the established practice of giving the same
recommendations to all patients without exception: to follow a diet, abandon bad habits, limit the
use of coffee and alcohol, lose weight, limit mental and physical activity, etc [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Virtual computer experiments have shown that these recommendations are really useful
for most, but not for all patients. To identify atypical patients for whom these
recommendations are not only useful, but also can cause harm, allows the intelligent system
of diagnosis and prediction of diseases [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>However, there are a number of difficulties in the union of ANN technology and
health protection. Decision issued using ANN must to be accompany acceptable
explanations and comments that ANN are not able to do because of their not verbality
inherited from the prototype brain. Moreover, the theory of neural networks is still
weak and only a very experienced mathematician can create a really adequate ANN
model that provides high accuracy of diagnosis and forecasting.</p>
      <p>
        To solve this problem, often turned to strategies that allow not just to train, but to
synthesize ANN in the same way as it happens in the real world these are
neuroevolutionary methods. However, it should be emphasized that in this case, the developer
instead of difficulties with the development and training of ANNs gets problems with
the use of evolutionary algorithms. In this research, authors consider the
modernization of the previously proposed modified genetic algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] using the selective
pressure mechanism to optimize memory consumption.
2
      </p>
    </sec>
    <sec id="sec-2">
      <title>The problems of synthesis of neuromodeles</title>
      <p>
        Inasmuch the choice of ANN topology is, as a rule, a complex task solved by trial and
error, the evolutionary search for a neural network structure is able to facilitate and to
some extent automate the process of solving the problem of configuring and training
ANNs. The simultaneous solution of two separate problems: setting the weights of
connections and setting the structure of ANN allows to some extent compensate for
the shortcomings inherent in each of them separately and combine their advantages.
On the other hand, the payment for this is a huge search area, as well as the
unification of a number of shortcomings caused by the use of the evolutionary approach.
Summing up, let list the advantages and disadvantages [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref8 ref9">8-14</xref>
        ].
      </p>
      <p>
        Advantages [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]:
─ the ability to automatically search the topology of ANN and obtain a more accurate
      </p>
      <p>ANN model by considering the non-standard, irregular topologies;
─ independence from the structure of ANN and characteristics of activation functions
of neurons;
─ the ability to automatically search the topology of the ANN and obtain a more
accurate ANN model.</p>
      <p>
        To simplify the task and improve the quality of the results, in the process of
searching for the topology of ANN, it is possible to use additional regulatory restrictions
that help to avoid excessive complication of the network, which is expressed in a
rapid increase in the number of hidden neurons and connections between them [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        Disadvantages [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]:
─ the complexity of estimating the structure of ANN without information on the
values of the weights of the connections;
─ the complexity of the search topology ANN;
─ the complexity of fine-tuning the weights of connections in the later stages of
evolutionary search;
─ large requirements for the amount of RAM due to the use of populations of ANNs.
      </p>
      <p>
        The first drawback is the main problem of evolutionary tuning of the ANN
structure. It is mainly due to the sensitivity of training results to initial conditions and
values of training algorithm parameters [
        <xref ref-type="bibr" rid="ref19 ref20 ref21 ref22">19-22</xref>
        ].
2.1
      </p>
      <p>
        Using of selective and crossover operators during the synthesis of
neuromodels
Selection of individuals consists in the selection (for the value of the fitness function
calculated at the previous stage) of those individuals who will participate in the
breeding of children for the next population, that is, for the next generation. This choice is
where
made according to the principle of natural selection, according to which the
chromosomes with the highest values of fitness function have the greatest chances to
participate in the creation of new individuals [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref26">23-26</xref>
        ]. There are different methods of
selection. The most popular is the so-called method of roulette wheel selection, which got
its name by analogy with the famous gambling [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. Each chromosome can be
mapped sector roulette wheel, the value of which is set proportional to the value of
the fitness function of the chromosome. Therefore, the greater the value of the fitness
function, the larger the sector on the roulette wheel. The entire roulette wheel
corresponds to the sum of the fitness function of all individuals in the population in
question. Each individual, Ind i for i  1,2,...,n (where n is population size) corresponds
to the wheel sector vIndi , expressed as a percentage, according to the formula
vIndi   ps Indi  100% ,
ps Indi   n
 F Indi 
i1
      </p>
      <p>F Indi 
,
(1)
and F Indi  is a value of the fitness function of the individual Ind i , and ps Indi  is
a probability of selection of individuals Ind i . Selection of an individual can be
represented as the result of turning the roulette wheel, since the selected individual (that is,
the winner) refers to the sector of the wheel that fell out. Obviously, the larger the
sector, the greater the likelihood of selecting the appropriate individual. Therefore, the
probability of choosing this chromosome is proportional to the value of its fitness
function.</p>
      <p>
        The roulette wheel selection method is considered by genetic algorithms to be the
main method of selecting individuals for the parent population with a view to their
subsequent transformation by genetic operators, such as crossing and mutation [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ],
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Despite the random nature of the selection procedure, parent individuals are
selected in proportion to the values of their fitness function, that is, according to the
probability of selection, determined by the formula . Each individual gets in the parent
      </p>
      <sec id="sec-2-1">
        <title>F Indi </title>
        <p>pool is the number of copies, which is set by the expression ps Indi   n .
 F Indi 
i1
Each individual gets in the parent pool is the number of copies, which is set by the
expression
cIndi   ps Indi   n ,
(2)
where n are the number of the individuals Ind i for i  1,2,...,n in the population,
and ps Indi  is a probability of selection of an individual Ind i , what is calculated by</p>
      </sec>
      <sec id="sec-2-2">
        <title>F Indi </title>
        <p>ps Indi   n . Strictly speaking, the number of copies of a given individual
 F Indi 
i
in the parent pool is equal to an integer part of cIndi  . When using formulas (1) and
(2) it is necessary to pay attention to the fact that cIndi   F Indi  , where F is the
F
average value of the fitness function in the population. Obviously, the roulette method
can be used when the value of the fitness function is positive. This method can only
be used in function maximization problems (not minimization).</p>
        <p>
          At tournament selection all individuals of population are divided into subgroups
with the further choice in each of them of an individual with the best fitness [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ].
There are two ways to make this choice: deterministic tournament selection and
stochastic tournament selection. Deterministic choices have a probability of 1, and
random choices have a probability of:  1 . Subgroups can be of arbitrary size, but most
often the population is divided into subgroups of 2-3 individuals each.
        </p>
        <p>The tournament method is suitable for solving problems of both maximization and
minimization of the function. In addition, it can be easily extended to problems
related to multi-criteria optimization, that is, to the case of simultaneous optimization of
several functions. In the tournament method, it can be changed the size of the
subgroups into which the population is divided (tournament size). Studies confirm that
the tournament method is more effective than the roulette method.</p>
        <p>During ranking selection individuals of the population are ranked according to the
values of their fitness function. This can be thought of as a sorted list of individuals,
ordered in the direction from the most adapted to the least adapted (or vice versa), in
which each individual is assigned a number, which determines its place in the list and
is called a rank. The number of copies of each individual introduced into the parent
population is calculated by a priori given function depending on the rank of the
individual.</p>
        <p>
          The advantage of the rank method is that it can be used both to maximize and
minimize the function. It also does not require scaling due to the problem of premature
convergence relevant to the roulette method [
          <xref ref-type="bibr" rid="ref23 ref24 ref25">23-25</xref>
          ].
        </p>
        <p>The application of genetic operators to individuals selected by selection leads to
the formation of a new population of children from the parent population created at
the previous stage.</p>
        <p>
          The crossover operation consists in the exchange of fragments of chains between
two parent individuals [
          <xref ref-type="bibr" rid="ref27 ref28 ref29 ref30 ref31 ref32">27-32</xref>
          ]. A pair of parents for mating are selected from parents
pool at random so that the probability of selecting a particular individual for breeding
equal to the probability pc . For example, if two individuals from the parent
population are randomly selected as parents n , то pc  2 n .
        </p>
        <p>
          Two-point crossover, as its name implies, differs from point crossing in that
descendants inherit fragments of parent individuals determined by two randomly
selected crossing points [
          <xref ref-type="bibr" rid="ref28 ref29 ref30">28-30</xref>
          ]. For a pair of individuals crossing at points 4 and 6 is shown
in Fig. 1. Note that such crossing does not lead to the destruction of the scheme,
which is the parent individual 2.
        </p>
        <p>
          Multiple-point crossover is a generalization of previous operations and is
characterized by a correspondingly large number of crossing points [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ], [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]. For example,
for three crossing points equal to 4, 6, and 9, and the same number of parents as in
Fig. 1, the crossing results are shown in Fig. 2.
        </p>
        <p>
          Uniform crossover is performed according to a randomly selected standard that
specifies which genes should be inherited from the first father (other genes are taken
from the second parent) [
          <xref ref-type="bibr" rid="ref28 ref29 ref30 ref31">28-31</xref>
          ]. That is the general rule of uniform crossing can be
represented as follows:
g2  Randg Ind1 , g Ind2 ,...,
gi  Randg Ind1 , g Ind2 }
        </p>
        <sec id="sec-2-2-1">
          <title>Crossover (Ind1, Ind2 , DataofCros)  Ind3</title>
          <p>g Ind3  {g1  Randg Ind1 , g Ind2 ,
.</p>
          <p>(3)
An example of uniform crossover is shown in Fig. 3.</p>
          <p> parent1: 0011001110 10 crossover 1011011110 10: child1
 parent 2 : 1010110110 11 0010100110 11: child 2
locus : 1 2 3 4 5 6 7 8 9 10 11 12
etalon : 0 1 0 1 1 0 1 1 1 0 1 1
The main factor of evolution is natural selection, which leads to the fact that among
genetically different individuals of the same population survive and leave offspring
only the most adapted to the environment. In genetic algorithms is also highlighted
stage of selection, which from the current population are selected and included in the
parent population of individuals who have the greatest value of a fitness function. The
next step, sometimes called evolution, involves genetic crossover and mutation
operators that recombine genes on chromosomes.</p>
          <p>However, it should be recognized that the classical genetic algorithm emulates
natural evolution is not fully, so it may exist wish to look at the mechanism of selective
pressure, which will more effectively perform crossover.</p>
          <p>We introduce selective pressure at the crossover stage by extending the selection
operation. It is established the relationship between the probability of gene
transmission to a descendant and the knowledge of the suitability of parents. To do this, we
will expand the rank selection by introducing additional criteria for evaluating
individuals.</p>
          <p>
            The first criterion will be used to assess memory redundancy. As mentioned above,
neural networks have memory that implemens as the weights connections. The less
memory ANN has, the fewer images it can remember. However, in a situation where
two ANNs with different memory provide the necessary accuracy of recognition
(evaluation), the network with less memory, of course, has the best generalizing
properties [
            <xref ref-type="bibr" rid="ref33">33</xref>
            ]. The network memory redundancy will be characterized by the redundancy
factor for the training sample storage:
crit m 
          </p>
          <p>WFFc  WFBc
samp Inst  sampFeat
,
(4)
where WFFc is the number of direct ANN connections (WFFc  w1, w2 ,...,wi  );WFBc
is the number of feedback ANN connections (WFBc  w1, w2 ,...,wj ); samp Inst is the
number of instances at training set; samp Feat is the number of features at training set.</p>
          <p>If critm  1 , then the ANN memory is redundant (the ANN memory dimension is
greater than the sample size). If critm  1 , then the ANN can remember the entire
training sample (the memory dimension of the ANN is equal to the size of the training
sample). If critm  1 , then the ANN will not be able to remember exactly the entire
training sample (the memory dimension of the ANN is less than the dimension of the
training sample), but the ANN will show generalizing and approximating abilities.</p>
          <p>
            The use of the second criterion is related to the approximation properties of the
ANN. One of the most important characteristics of ANN models is the quality of
approximation. In the case where the error level of the models is one-to-one, the
approximation quality is higher in the model that uses fewer links [
            <xref ref-type="bibr" rid="ref33">33</xref>
            ]. The quality
coefficient of approximation of the neural network model is defined as the average share of
error attributable to the non-zero weights of the network:
crit a 
          </p>
          <p>Error
WFFc  WFBc  Ww0
where Error is the aggregate error allowed by the network (for example, root mean
square error) is such that Error  , where  is the maximum allowable error
(learning objective). As an Error it can be used a sample error ( Eнав. ) or a test sample
error as an error ( Eтест. ); WFFc is the number of direct connections of the ANN
( WFFc  w1, w2 ,...,wi  ); WFBc is the number of feedbacks for recurrent ANN
( WFBc  w1, w2 ,...,wj ); Ww0 are zero weights (ANN connections whose weight is</p>
          <p>Thus, we consider the modification of rank selection using criteria for evaluating
ANN moles.</p>
          <p>Selection begins by sorting (ranking) individuals based on their availability so that
FIndi   FInd j  for i  j . Each individual is then assigned a probability of being
selected ps , taken from a given restricted division  ps  1 . The probability of
sei
lection is calculated by the form:
psi </p>
          <p> a  a  b rank  crit m  crit a 1 
1 </p>
          <p> ,
n  n 1 
(6)
where a 1;2, b  2  a , rank is a rank of individuals in the sorted list of
individuals.</p>
          <p>The use of criteria in determining the probability of selection solves several
problems, namely:
─ advance convergence of the method;
─ reducing the variety;
─ selection of the best individuals, at the same rank (at the same value of the fitness
function).</p>
          <p>It has long been known that setting the probability of transmission of the parent
gene to the offspring in uniform crossing can significantly increase its efficiency, and
also allows to emulate other crossover operators (single-point, two-point). It is also
known that the use of a uniform crossing operator allows to apply the so-called
multiparent recombination, when to cross one child more than two parents. Uniform
crossing gives greater flexibility when combining strings, which is an important
advantage when working with genetic algorithms.</p>
          <p>Therefore, a uniform crossing with a specified parent pool size will be used as the
crossing operator. The pool will be filled with individuals selected using modified
rank selection. This approach adds flexibility to the method and allows to hope for a
change in the behavior of the method.
4</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Experiments</title>
      <p>
        Data for testing were taken from the open repository – UC Irvine Machine Learning
Repository. Data sample was used: Breast Cancer Coimbra Data Set [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ]. Clinical
features were observed or measured for 64 patients with breast cancer and 52 healthy
controls. There are 10 predictors, all quantitative, and a binary dependent variable,
indicating the presence or absence of breast cancer. The predictors are anthropometric
data and parameters which can be gathered in routine blood analysis. Prediction
models based on these predictors, if accurate, can potentially be used as a biomarker of
breast cancer. Table 1 shows the main characteristics of the data sample. 75% of the
sample was used for training, 25% of the sample was used for testing.
─ the spent time, s;
─ average error of final network ( E );
─ the size of parent pool.
      </p>
      <p>The relative error value in this case will be calculated as the ratio of the
classification error to the total sample size (number of instances).</p>
      <p>E </p>
      <p>errorclass</p>
      <sec id="sec-3-1">
        <title>Numbersampl</title>
        <p>100% ,
(7)
where E is relative error; errorclass is classification error; Numbersampl the number
of instances in the sample.</p>
        <p>The following hardware and software have been used for experimental verification
of the proposed method for ANN synthesis: the computing system of the Department
of software tools of Zaporizhzhia Polytechnic National University (ZPNU),
Zaporizhzhya: Xeon processor E5-2660 v4 (14 cores), RAM 4x16 GB DDR4, the
programming model of Java threads.
5</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>The results analysis</title>
      <p>From the results of the experiment it can be seen that the most acceptable
performance of the method is observed at the size of the parent pool 2 and 7. In other cases
of parents, the ratio of resources used and time spent in the exact initial neuromodels
is not satisfactory. It can also be noted that the number of parent individuals for
crossing &gt;10 does not make sense, because with large values of the execution time and the
overhead of sending data, the accuracy deteriorates significantly.</p>
      <p>
        Moreover, it can be concluded that the use of selective pressure and uniform
crossing reduce the size of the population, without taking into account and without
considering those individuals of the population that are characterized by a small value of the
fitness function. Also, selective pressure allows to take into account additional quality
indicators of neural network models [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. This avoids the problem of identical ranks
for models with the same fitness function score.
6
      </p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>The increasing of accuracy and reduction of memory and computing power costs for
storing and crossing the total population volume confirm the high efficiency of the
proposed modification. However, the growth of input parameters should be noted.
Therefore, the next step may be to automate the selection of input parameters,
depending on the problem and its boundaries.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Einarson</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acs</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ludwig</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panton</surname>
          </string-name>
          , U.H.:
          <article-title>Prevalence of cardiovascular disease in type 2 diabetes: a systematic literature review of scientific evidence from across the world in 2007-2017</article-title>
          .
          <source>Cardiovasc Diabetol</source>
          <volume>17</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          (
          <year>2018</year>
          ).
          <source>DOI: 10.1186/s12933-018-0728-6</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Petrie</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guzik</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Touyz</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <string-name>
            <surname>Diabetes</surname>
          </string-name>
          ,
          <string-name>
            <surname>Hypertension</surname>
          </string-name>
          , and
          <article-title>Cardiovascular Disease: Clinical Insights and Vascular Mechanisms</article-title>
          .
          <source>The Canadian journal of cardiology 34(5)</source>
          ,
          <fpage>575</fpage>
          -
          <lpage>584</lpage>
          (
          <year>2017</year>
          ). DOI:
          <volume>10</volume>
          .1016/j.cjca.
          <year>2017</year>
          .
          <volume>12</volume>
          .005
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Thanassoulis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasan</surname>
          </string-name>
          , R.S.:
          <source>Genetic Cardiovascular Risk Prediction - Will We Get There? Circulation</source>
          <volume>122</volume>
          (
          <issue>22</issue>
          ),
          <fpage>2323</fpage>
          -
          <lpage>2334</lpage>
          (
          <year>2010</year>
          ).
          <source>DOI: 10.1161/CIRCULATIONAHA.109.909309</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Jungen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scherschel</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickholt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuklik</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klatt</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <article-title>Disruption of cardiac cholinergic neurons enhances susceptibility to ventricular arrhythmias</article-title>
          .
          <source>Nature Communications</source>
          <volume>8</volume>
          (
          <year>2017</year>
          ).
          <source>DOI: 10.1038/ncomms14155</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Using Modern Architectures of Recurrent Neural Networks for Technical Diagnosis of Complex Systems</article-title>
          .
          <source>In: Proceedings of the 2018 International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&amp;T)</source>
          , Kharkiv, Ukraine, pp.
          <fpage>411</fpage>
          -
          <lpage>416</lpage>
          (
          <year>2018</year>
          ). DOI: 10.1109/INFOCOMMST.2018.8632015
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Volchek</surname>
            ,
            <given-names>Y.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shyshko</surname>
            ,
            <given-names>V.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spiridonova</surname>
            ,
            <given-names>O.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mokhort</surname>
            ,
            <given-names>T.V.</given-names>
          </string-name>
          :
          <article-title>Position of the model of the artificial neural network in medical expert systems</article-title>
          .
          <source>Juvenis Scientia (9)</source>
          ,
          <fpage>4</fpage>
          -
          <lpage>9</lpage>
          . Scientia, Saint Petersburg (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorobii</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Synthesis of artificial neural networks using a modified genetic algorithm</article-title>
          .
          <source>In: Proceedings of the 1st International Workshop on Informatics &amp; Data-Driven Medicine (IDDM 2018)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          (
          <year>2018</year>
          ). dblp key: conf/iddm/PerovaBSKR18
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorobii</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shkarupylo</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Modification of the Genetic Method for Neuroevolution Synthesis of Neural Network Models for Medical Diagnosis</article-title>
          .
          <source>In: Proceedings of the Second International Workshop on Computer Modeling and Intelligent Systems (CMIS-2019)</source>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>158</lpage>
          (
          <year>2019</year>
          ). dblp key: conf/cmis/LeoshchenkoOSGS19
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ming</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A review on neural networks with random weights</article-title>
          .
          <source>Neurocomputing</source>
          ,
          <volume>275</volume>
          ,
          <fpage>278</fpage>
          -
          <lpage>287</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kieffer</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babaie</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalra</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tizhoosh</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          :
          <article-title>Convolutional neural networks for histopathology image classification: Training vs. using pre-trained networks</article-title>
          .
          <source>In: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA)</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Training sample reduction based on association rules for neuro-fuzzy networks synthesis</article-title>
          .
          <source>Optical Memory and Neural Networks (Information Optics)</source>
          <volume>23</volume>
          (
          <issue>2</issue>
          ),
          <fpage>89</fpage>
          -
          <lpage>95</lpage>
          (
          <year>2014</year>
          ). DOI: 10.3103/S1060992X14020039
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Shkarupylo</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolpakova</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Development of stratified approach to software defined networks simulation</article-title>
          .
          <source>Eastern-European Journal of Enterprise Technologies</source>
          <volume>89</volume>
          (
          <issue>5/9</issue>
          ),
          <fpage>67</fpage>
          -
          <lpage>73</lpage>
          (
          <year>2017</year>
          ). DOI: 10.15587/1729-4061.2017.110142
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Kolpakova</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lovkin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Improved method of group decision making in expert systems based on competitive agents selection</article-title>
          .
          <source>In: Proceedings of the IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON)</source>
          , pp.
          <fpage>939</fpage>
          -
          <lpage>943</lpage>
          (
          <year>2017</year>
          ). DOI: 10.1109/UKRCON.2017.8100388
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Stepanenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deineha</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaiko</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Development of the method for decomposition of superpositions of unknown pulsed signals using the second-order adaptive spectral analysis</article-title>
          .
          <source>Eastern-European Journal of Enterprise Technologies</source>
          <volume>2</volume>
          (
          <issue>9</issue>
          ),
          <fpage>48</fpage>
          -
          <lpage>54</lpage>
          (
          <year>2018</year>
          ). DOI: 10.15587/1729-4061.2018.126578
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Yadav</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murari</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Prospects and limitations of non-invasive blood glucose monitoring using near-infrared spectroscopy</article-title>
          .
          <source>Biomedical Signal Processing and Control</source>
          <volume>18</volume>
          (
          <year>2015</year>
          ). DOI: 10.1016/j.bspc.2015.01.005
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Trundle</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Medical image analysis with artificial neural networks</article-title>
          .
          <source>Computerized medical imaging and graphics: the official journal of the Computerized Medical Imaging Society</source>
          <volume>34</volume>
          ,
          <fpage>617</fpage>
          -
          <lpage>631</lpage>
          (
          <year>2010</year>
          ). DOI: 10.1016/j.compmedimag.2010.07.003
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mujumdar</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          :
          <article-title>Recent developments of artificial intelligence in drying of fresh food: A review</article-title>
          .
          <source>Critical Reviews in Food Science and Nutrition</source>
          <volume>59</volume>
          (
          <issue>14</issue>
          ),
          <fpage>2258</fpage>
          -
          <lpage>2275</lpage>
          (
          <year>2019</year>
          ). DOI: 10.1080/10408398.2018.1446900
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <article-title>Neuroevolution - evolving Artificial Neural Networks topology from the scratch</article-title>
          , https://becominghuman.ai/neuroevolution-evolving-artificial-neural-networks-topology-from-the-scratch-d1ebc5540d84, last accessed 2019/09/04
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>O.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Software-hardware systems: Agent technologies for feature selection</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          <volume>48</volume>
          (
          <issue>2</issue>
          ),
          <fpage>257</fpage>
          -
          <lpage>267</lpage>
          (
          <year>2012</year>
          ). DOI: 10.1007/s10559-012-9405-z
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>The decision tree construction based on a stochastic search for the neuro-fuzzy network synthesis</article-title>
          .
          <source>Optical Memory and Neural Networks (Information Optics)</source>
          <volume>24</volume>
          (
          <issue>1</issue>
          ),
          <fpage>18</fpage>
          -
          <lpage>27</lpage>
          (
          <year>2015</year>
          ). DOI: 10.1007/s10559-012-9405-z
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Alsayaydeh</surname>
            ,
            <given-names>J.A.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shkarupylo</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bin Hamid</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Stratified model of the Internet of Things infrastructure</article-title>
          .
          <source>Journal of Engineering and Applied Sciences</source>
          <volume>13</volume>
          (
          <issue>20</issue>
          ),
          <fpage>8634</fpage>
          -
          <lpage>8638</lpage>
          (
          <year>2018</year>
          ). DOI: 10.3923/jeasci.2018.8634.8638
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A stochastic approach for association rule extraction</article-title>
          .
          <source>Pattern Recognition and Image Analysis</source>
          <volume>26</volume>
          (
          <issue>2</issue>
          ),
          <fpage>419</fpage>
          -
          <lpage>426</lpage>
          (
          <year>2016</year>
          ). DOI: 10.1134/S1054661816020139
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <article-title>Exploring Weight Agnostic Neural Networks</article-title>
          , https://ai.googleblog.com/2019/08/exploring-weight-agnostic-neural.html, last accessed 2019/09/01
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Saini</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Review of Selection Methods in Genetic Algorithms</article-title>
          .
          <source>International Journal of Engineering and Computer Science</source>
          <volume>6</volume>
          (
          <issue>12</issue>
          ),
          <fpage>22261</fpage>
          -
          <lpage>22263</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Jebari</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madiafi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Selection Methods for Genetic Algorithms</article-title>
          .
          <source>International Journal of Emerging Sciences</source>
          <volume>3</volume>
          (
          <issue>4</issue>
          ),
          <fpage>333</fpage>
          -
          <lpage>344</lpage>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Quick convergence of genetic algorithm for QoS-driven web service selection</article-title>
          .
          <source>Computer Networks</source>
          <volume>52</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1093</fpage>
          -
          <lpage>1104</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>An improved fast-convergent genetic algorithm</article-title>
          .
          <source>IEEE International Conference on Robotics, Intelligent Systems and Signal Processing</source>
          <volume>2</volume>
          ,
          <fpage>1197</fpage>
          -
          <lpage>1202</lpage>
          (
          <year>2003</year>
          ). DOI: 10.1109/RISSP.2003.1285761
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <article-title>Genetic Algorithms - Crossover</article-title>
          , https://www.tutorialspoint.com/genetic_algorithms/genetic_algorithms_crossover.htm, last accessed 2019/09/01
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Mendes</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A comparative study of crossover operators for genetic algorithms to solve the job shop scheduling problem</article-title>
          .
          <source>WSEAS Transactions on Computers</source>
          <volume>12</volume>
          ,
          <fpage>164</fpage>
          -
          <lpage>173</lpage>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Varun Kumar</surname>
            ,
            <given-names>S.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panneerselvam</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A Study of Crossover Operators for Genetic Algorithms to Solve VRP and its Variants and New Sinusoidal Motion Crossover Operator</article-title>
          .
          <source>International Journal of Computational Intelligence Research</source>
          <volume>13</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1717</fpage>
          -
          <lpage>1733</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Hassanat</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alkafaween</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>On Enhancing Genetic Algorithms Using New Crossovers</article-title>
          , https://arxiv.org/ftp/arxiv/papers/1801/1801.02335.pdf, last accessed 2019/09/02
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Umbarkar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheth</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Crossover operators in genetic algorithms: a review</article-title>
          .
          <source>ICTACT Journal on Soft Computing</source>
          <volume>6</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1083</fpage>
          -
          <lpage>1092</lpage>
          (
          <year>2015</year>
          ). DOI: 10.21917/ijsc.2015.0150
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Criteria for Comparison of Recognition Models Based on Neural Networks and Analysis of Their Mutual Relations</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>1</volume>
          ,
          <fpage>142</fpage>
          -
          <lpage>152</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <article-title>Breast Cancer Coimbra Data Set</article-title>
          , https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Coimbra, last accessed 2019/09/01
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Leoshchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliinyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrupsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subbotin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lytvyn</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Parallel Genetic Method for the Synthesis of Recurrent Neural Networks for Using in Medicine</article-title>
          .
          <source>In: Proceedings of the Second International Workshop on Computer Modeling and Intelligent Systems (CMIS-2019)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          (
          <year>2019</year>
          ). dblp key: conf/cmis/LeoshchenkoOSSL19
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>