<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Modification of the genetic method for neuroevolution synthesis of neural network models for medical diagnosis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<affiliation key="aff0">
								<address>
									<addrLine>Sergey Subbotin (ORCID 0000-0001-5814-8268), Nataliia Gorobii (ORCID 0000-0003-2505-928X), Vadym Shkarupylo (ORCID 0000-0002-0523-8910)</addrLine>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Dept. of Software Tools</orgName>
								<orgName type="institution">Zaporizhzhia National Technical University</orgName>
								<address>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Dept. of Computer Systems and Networks</orgName>
								<orgName type="institution">National University of Life and Environmental Sciences of Ukraine</orgName>
								<address>
									<postCode>03041</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Modification of the genetic method for neuroevolution synthesis of neural network models for medical diagnosis</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E72F1231264AF4D951B641D34F957EAB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>artificial neural networks</term>
					<term>synthesis</term>
					<term>neuroevolution</term>
					<term>genetic method</term>
					<term>support vector machine</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The main aim of this paper is to investigate the possibility of applying artificial neural networks as neural network models for medical diagnostics. One of the most problematic and complex issues in implementing neural network models is the initial stage of synthesis. The article presents a comparison of existing synthesis methods, as well as a new method. The experiments confirm the effectiveness and expediency of the proposed method.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The diagnosis stage plays a crucial role in medicine. A timely, accurate diagnosis facilitates the choice of therapy and significantly increases the probability of successful treatment of the patient. The use of neural networks is one of the ways to improve the efficiency of medical diagnosis <ref type="bibr" target="#b0">[1]</ref>.</p><p>The accuracy of a diagnosis, and the speed with which it can be delivered, depend on many factors: the patient's condition, the available data on the symptoms and signs of the disease and the results of laboratory tests, the total amount of medical information on the observation of such symptoms across a variety of diseases and, finally, the qualification of the doctor. A major role in this process is played by the human factor, which often leads to errors <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b1">[2]</ref>. Some of the specific difficulties of medical diagnosis that need to be considered are listed below <ref type="bibr" target="#b2">[3]</ref>.</p><p>The basis for a reliable diagnosis is a wealth of practical experience, which can be acquired only by the middle of a doctor's career and is, naturally, absent at the end of training. This is especially true for rare or new diseases, where experienced doctors are in the same situation as beginners.</p><p>The quality of diagnosis depends on the skill, knowledge and intuition of the doctor.</p><p>Emotional problems and fatigue adversely affect the work of the doctor.</p><p>Training specialists is a long and expensive procedure, so many countries, even developed ones, suffer a shortage of qualified doctors.</p><p>Medicine is one of the fastest growing and developing fields of science. New results supersede the old ones, and new drugs appear every day. 
The same applies to the diseases themselves, which take new forms.</p><p>These factors necessitate the search for new solutions and tools, for example, the use of artificial neural networks (ANNs) <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Using the ANN in medical diagnosis</head><p>ANN technologies are designed to solve difficult-to-formalize problems, a class to which many problems of medicine can be reduced <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>. This is primarily because the researcher is often presented with a large amount of heterogeneous factual material for which a mathematical model has not yet been created. In addition, it is necessary to present the results of the analysis in a form understandable to the specialist. The ANN is thus a powerful and flexible method for simulating processes and phenomena. Neural networks can differ in structure and form, but they share several common features. A distinctive feature of neural networks is their ability to learn from experimental data of the subject area. For medical subjects, experimental data are presented as a set of initial features or parameters of the object and the diagnosis based on them. Training an ANN is an iterative process in which the neural network finds hidden nonlinear relationships between the initial parameters and the final diagnosis, as well as the combination of weight coefficients of neurons connecting adjacent layers at which the classification error tends to a minimum <ref type="bibr" target="#b5">[6]</ref>. In the training process, the input of the neural network is fed a sequence of initial parameters together with the diagnoses that characterize those parameters. Careful formation of the training sample determines the quality of the network's work, as well as its level of error.</p><p>A number of difficulties are associated with the use of neural networks in practical problems. 
One of the main problems in applying ANN technologies is that the degree of complexity of the projected ANN sufficient for a reliable diagnosis is not known in advance. This complexity can be unacceptably high and can require a more complex network architecture. It is known, for example, that the simplest single-layer neural networks are able to solve only linearly separable problems <ref type="bibr" target="#b6">[7]</ref>. This limitation can be overcome by using multilayer neural networks.</p><p>The basis of ANNs are neurons with a structure similar to their biological analogues. Each neuron can be represented as a microprocessor with several inputs and one output. When neurons are joined together, a structure is formed which is called a neural network. Vertically aligned neurons form layers: input, hidden and output. The number of layers determines the complexity and, at the same time, the functionality of the network, which is not fully investigated.</p><p>For researchers, the first stage of creating a network is the most difficult task. The following recommendations are given in the literature <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref>.</p><p>1. The number of neurons in the hidden layer is determined empirically, but in most cases the rule</p><formula xml:id="formula_0">N_h = sqrt(N_i * N_o)</formula><p>is used, where N_h is the number of neurons in the hidden layer, and N_i and N_o in the input and output layers respectively.</p><p>2. Increasing the number of inputs and outputs of the network leads to the need to increase the number of neurons in the hidden layer. 3. 
For ANNs modeling multistage processes, an additional hidden layer is required; on the other hand, the addition of hidden layers may lead to overfitting and wrong decisions at the output of the network.</p><p>Based on these recommendations, the number of layers and the number of neurons in the hidden layers is chosen by the researcher on the basis of personal experience.</p></div>
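The sizing recommendations above can be sketched as a small helper. This is an illustrative sketch only: the geometric-mean rule for the hidden layer is one common reading of recommendation 1, and the function names are ours, not the authors'.

```python
import math

def hidden_layer_size(n_inputs: int, n_outputs: int) -> int:
    """Empirical rule: geometric mean of input and output layer sizes
    (one common reading of recommendation 1 above)."""
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

def suggest_topology(n_inputs: int, n_outputs: int, multistage: bool = False):
    """Recommendations 2-3: more inputs/outputs widen the hidden layer;
    a multistage process gets one extra hidden layer."""
    width = hidden_layer_size(n_inputs, n_outputs)
    n_hidden_layers = 2 if multistage else 1
    return [n_inputs] + [width] * n_hidden_layers + [n_outputs]
```

For the Breast Cancer Coimbra data used later (10 inputs, 1 binary output), this heuristic would suggest a hidden layer of about 3 neurons.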
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Review of the literature</head><p>ANNs are attractive from an intuitive point of view because they are based on a primitive biological model of nervous systems. In this connection, there is an assumption that it may be appropriate to apply another borrowing from nature to improve them, for example evolutionary computation and, in particular, neuroevolution. Neuroevolution in this paper refers to the automatic modification of neural networks using genetic algorithms. With this methodology, possible variants of neural networks with different topologies are grown which, with each iteration (called a generation), solve the problem better. Although genetic programming, and evolutionary computation in general, does not guarantee finding the optimal result, this approach eventually allows us to arrive at results applicable to solving practical problems. However, it takes a reasonable amount of time to achieve such results. Thus, the complexity of creating a neural network is significantly reduced: it is only necessary to select a parameter that evaluates the work of the neural network and provide a suitable data set.</p><p>Despite the fact that most of the works devoted to the neuroevolutionary approach offer only a theoretical approach to solving problems of neural network optimization, it is possible to find several promising and noteworthy methods <ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>.</p><p>Among the early works, the noteworthy cellular encoding method of Frederick Gruau [15], [16] uses a special grammar for the representation of neural network structures. 
One individual represented an entire neural network, with each neuron considered a biological cell, and the growth of the network was determined through mechanisms of sequential and parallel "division" of the neuron-cells. However, this method requires the implementation of a large number of specific operators that simulate cell activity.</p><p>The Hierarchical SANE (Symbiotic, Adaptive NeuroEvolution) <ref type="bibr" target="#b16">[17]</ref> method uses a different approach. It considers the development of two independent populations: in one, the individuals are separate neurons, while the other contains information about the structures of an artificial neural network. The disadvantages of this method include the fact that the number of hidden neurons and connections is limited.</p><p>The ESP method [18] is a development of the SANE method. Its main difference is that the network structure is fixed and given a priori. The population of neurons is divided into subpopulations, in each of which evolution proceeds independently. Due to parallelization of the solution search, as well as simplification of the problem by rejecting the evolution of the artificial neural network structure, ESP works much faster than SANE, sometimes by an order of magnitude, but for the successful operation of the method an appropriate structure of the neural network must be chosen [19].</p><p>One of the most potentially successful attempts to get rid of the disadvantages of direct coding while preserving all its advantages is the method proposed in 2002, called NEAT (NeuroEvolution of Augmenting Topologies) [15], [20]. Designed by Kenneth Stanley, the NEAT method allows the structure of the network to be adjusted without restrictions on its complexity. 
The solution proposed by the authors is based on the biological concept of homologous genes (alleles), as well as on the existence in nature of the synapsis process, the alignment of homologous genes before crossover. The technique assumes that two genes (in two different individuals) are homologous if they are the result of the same mutation in the past. In other words, with each structural mutation (gene addition), the new gene is assigned a unique number, which then does not change during evolution. The method uses a number of techniques, such as historical markings and speciation of individuals, to make the process of evolution significantly more efficient [21].</p><p>Summing up, it can be noted that the joint use of evolutionary methods and artificial neural networks makes it possible to solve the problems of configuration and training of artificial neural networks both individually and simultaneously. One of the advantages of this combined approach is a largely unified way of solving a variety of problems of classification, approximation, control and modeling. The use of a qualitative evaluation of the functioning of artificial neural networks allows neuroevolutionary methods to be used to solve problems of studying the adaptive behavior of intelligent agents, searching for game strategies, and signal processing. Although the number of problems and open questions concerning the development and application of neuroevolutionary methods (coding methods, genetic operators, methods of analysis, etc.) is large, an adequate understanding of the problem and of the neuroevolutionary approach is often sufficient for the successful solution of a problem, as evidenced by a large number of interesting and successful works in this direction [15].</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Materials and methods</head><p>In the proposed method, a solution is sought using a population of neural networks</p><formula xml:id="formula_1">P = {NN_1, NN_2, ..., NN_n}.</formula><p>Initialization aims at a uniform distribution of one and zero bits in the population, to minimize the probability of early convergence of the method (p → min). After initialization, the networks coded in the genes of all individuals have no hidden neurons (N_h), and all input neurons (N_i) are connected to each output neuron (N_o). That is, at first, all the presented ANNs differ only in the weights of the interneuron connections w_i. During evaluation, a neural network is first built from the genetic information of the individual under consideration, and then its performance is checked, which determines the fitness function (f_fitness) of the individual. After evaluation, all individuals are sorted in order of decreasing fitness, and the more successful half of the sorted population is allowed to cross, with the best individual immediately moving to the next generation. During reproduction, each individual is crossed with a randomly selected individual from among those selected for crossing. The resulting two descendants are added to the new generation</p><formula xml:id="formula_2">P_{G+1} = {Ind'_1, Ind'_2, ..., Ind'_n}.</formula><p>Once a new generation is formed, the mutation operator starts working. However, it is important to note that truncation selection significantly reduces the diversity within the population, leading to early convergence of the algorithm, so the probability of mutation is chosen to be rather large (p_mut = 15-25%) <ref type="bibr" target="#b21">[22]</ref>.</p><p>
If the best individual in the population does not change for a certain number of generations (by default, this number is set to eight), this individual is forcibly removed, and a new best individual is randomly selected from the queue. This makes it possible to escape from areas of local minima caused by the relief of the objective function, as well as by a large degree of convergence of individuals within one generation. The general scheme of the method is shown in Fig. <ref type="figure" target="#fig_0">1</ref>.</p></div>
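The generation loop described above (truncation selection of the better half, elitism for the best individual, random mate choice, and a relatively high mutation probability) can be sketched as follows. The function names and the concrete p_mut value are assumptions for illustration, and the stagnation restart from Fig. 1 is omitted for brevity.

```python
import random

P_MUT = 0.2   # mutation probability, within the 15-25% range suggested above

def evolve_one_generation(population, fitness, mutate, crossover, rng=random):
    """One generation of the described scheme: sort by fitness, keep the
    elite individual unchanged, cross the better half, mutate offspring."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]          # truncation selection
    next_gen = [ranked[0]]                        # elitism: best survives as-is
    while len(next_gen) < len(population):
        a = parents[(len(next_gen) - 1) % len(parents)]
        b = rng.choice(parents)                   # random mate from the top half
        for child in crossover(a, b):             # two descendants per crossing
            if rng.random() < P_MUT:
                child = mutate(child)
            next_gen.append(child)
    return next_gen[: len(population)]
```

With a toy bitstring fitness, elitism guarantees the best score never decreases from generation to generation.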
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Using genetic operators</head><p>Obviously, the chosen method requires special genetic operators that implement crossover and mutation. In crossover, two parental individuals are used, producing two descendants. Common neurons and connections are inherited by both offspring, and the values of connections in the descendants' networks are formed by a two-point crossover. Elements of the ANNs that are distinct are distributed between the descendants.</p><p>An important feature is that neurons with the same indices are considered identical, despite a different number of connections and position in the network, and despite the fact that one of these neurons could have had a different index, changed as a result of index correction after a mutation. For this purpose, three coefficients were introduced that regulate the size and direction of evolution of the network.</p><p>The first of them characterizes the degree of connectedness of neurons in the network and is calculated by the formula:</p><formula xml:id="formula_3">f_con = 2 * N_c / (N_s * (N_s - 1) - N_i * (N_i - 1) - (1 - FB) * N_o * (N_o - 1)) ( 1 )</formula><p>where N_c is the number of connections in the network; N_i, N_o and N_s are, respectively, the number of input neurons, output neurons and the total number of neurons in the network; FB is a variable indicating whether feedbacks are permitted (FB = 1) or not (FB = 0). It is worth noting that connections from hidden neurons to the output can appear in any case. Thus, the smaller f_con, the more likely a new connection will be added as a result of mutation [23]. The use of the second coefficient is based on the assumption that the more elements in the sum of the input and output vectors of the training set (the greater the total number of input and output neurons), the more complex a network is probably necessary to solve the problem. 
The second coefficient is calculated by the formula:</p><formula xml:id="formula_4">f_top.diff = (N_i + N_o) / N_s. (<label>2</label>)</formula><p>That is, the more neurons in the network, the smaller f_top.diff will be, and the less likely a mutation that adds a new hidden neuron will be selected <ref type="bibr" target="#b22">[23]</ref>.</p><p>The third criterion is also based on the assumption that a more complex network should be used to solve more complex problems. However, this criterion characterizes the conditional complexity of the network. It is based on the concept of cyclomatic complexity <ref type="bibr" target="#b23">[24]</ref>, <ref type="bibr" target="#b24">[25]</ref>.</p><formula xml:id="formula_6">f_comp.diff = (N_i + N_o) / N_s. ( 3 )</formula><p>For any of the described cases, the algorithm uses the combination</p><formula xml:id="formula_7">f_con, f_top.diff, f_comp.diff</formula>
<div xmlns="http://www.tei-c.org/ns/1.0"><head> </head><p>, because in use it is necessary to take into account the degree of connectivity of the already existing neurons. Thus, mutations can be used to change the parameters of the ANN structure pointwise.</p><p>Chaotic addition (removal) of neurons and connections can lead to situations where, for example, a network has many neurons and few connections. It would be more logical to apply different types of mutations depending on the features of the network architecture represented by the mutating individual <ref type="bibr" target="#b25">[26]</ref><ref type="bibr" target="#b26">[27]</ref><ref type="bibr" target="#b27">[28]</ref>.</p><p>Removing links gives a side effect: there may be hanging neurons that have no incoming connections, as well as dead-end neurons, that is, neurons without output connections <ref type="bibr" target="#b25">[26]</ref>, <ref type="bibr" target="#b26">[27]</ref>, <ref type="bibr" target="#b28">[29]</ref>. In cases where the neuron activation function is such that its value at a zero weighted sum of inputs is not equal to zero, the presence of hanging neurons makes it possible to adjust the neuron bias. It is worth noting that, on the other hand, the removal of links may help remove uninformative input features.</p></div>
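The coefficients above can be sketched in code. Formulas (2) and (3) are transcribed directly; the denominator of formula (1) is garbled in the source, so the estimate of the maximum number of connections below is an assumption, not the authors' exact expression.

```python
def f_top_diff(n_i: int, n_o: int, n_s: int) -> float:
    """Formula (2): shrinks as the network grows, making 'add neuron'
    mutations less likely for already-large networks."""
    return (n_i + n_o) / n_s

def f_con(n_c: int, n_i: int, n_o: int, n_s: int, fb: int) -> float:
    """Degree of connectedness: actual connections over an estimate of the
    maximum possible ones. The denominator is a reconstruction of formula
    (1) and should be treated as an assumption: all ordered pairs, minus
    input-to-input pairs, minus output-to-output pairs when feedback is
    forbidden (fb = 0)."""
    max_links = n_s * (n_s - 1) - n_i * (n_i - 1) - (1 - fb) * n_o * (n_o - 1)
    return 2 * n_c / max_links
```

A fully connected small network then scores f_con close to 1, steering mutation away from adding further connections.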
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Choosing the mutation type</head><p>Consider the dependence of the type of mutation on the values f_con, f_top.diff and f_comp.diff. This approach, on the one hand, does not limit the number of hidden neurons from above; on the other hand, it prevents unbounded growth of the network, because the addition of each new neuron to the network becomes less likely. The mutation of the weight of a random existing connection occurs for all mutating individuals with a probability of 0.5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>This is necessary in order to change the number of neurons adequately to the network topology, because the addition (removal) of neurons needs information about the feasibility of the change. This information can be obtained indirectly from the value of the characteristic.</p></div>
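The adaptive choice of mutation type can be illustrated as follows. The decision diagram of Fig. 2 is not reproduced in the text, so the thresholds and operator names below are assumptions: the coefficients are simply used as probabilities, which matches the qualitative behavior described (sparse networks tend to gain connections, small networks tend to gain neurons, and weight mutation fires with probability 0.5).

```python
import random

def choose_mutations(f_con: float, f_top_diff: float, rng=random):
    """Illustrative sketch of adaptive mutation choice; the exact decision
    diagram (Fig. 2) differs, so treat these rules as assumptions."""
    ops = []
    if rng.random() < (1.0 - f_con):       # sparse network: add a connection
        ops.append("add_connection")
    if rng.random() < f_top_diff:          # small network: add a hidden neuron
        ops.append("add_neuron")
    if rng.random() < 0.5:                 # weight mutation for half of mutants
        ops.append("perturb_weight")
    return ops
```

Used this way, a network with f_con near 1 and f_top.diff near 0 receives almost exclusively weight mutations, as the method intends.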
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">The calculation of the output layer of ANN</head><p>When using the support vector machine, the optimality criterion for calculating the output weights may not be specified. If the value of the mean square error is replaced by the criterion of the maximum separation of the support vectors, then the optimal linear weights of the output can be estimated using, for example, quadratic programming, as in the traditional support vector method; for this it is advisable to use the Evoke operator <ref type="bibr" target="#b29">[30]</ref>, by the formula:</p><formula xml:id="formula_8">y(t) = w_0 + sum_{i=1..k} sum_{j=0..l_i} w_{i,j} * K(phi(t), phi_i(j)),<label>(4)</label></formula><p>where</p><formula xml:id="formula_9">phi(t) in R^n is the output of a recurrent neural network phi(.) at time t; K(., .) is a predefined kernel function; w_{i,j}</formula><p>are weights corresponding to the k training sequences phi_i, each of length l_i, and are calculated using the support vector machine. The value of the mean square error is replaced by the criterion of maximum separation of support vectors. In this case, the optimal linear weights can be estimated using quadratic programming, as in the traditional support vector machine.</p><p>One of the problems of implementing a neuroevolutionary method is the algorithm for calculating the output of an ANN with arbitrary topology.</p><p>An ANN can be represented as a directed planar graph. Since the network structure can be arbitrary, loops and cycles containing any nodes are allowed in the graph, except for the nodes corresponding to input neurons. 
Let us denote the set of nodes of the graph by</p><formula xml:id="formula_10">V = {v_i | i in [0; N_v - 1]}</formula><p>, and the set of arcs by</p><formula xml:id="formula_11">E = {e_j | j in [0; N_e - 1]}</formula><p>, where N_v and N_e are, respectively, the number of nodes and arcs in the graph. The arc that goes from node k to node l is denoted by the ordered pair</p><formula xml:id="formula_12">c_{k,l} = (v_k, v_l)</formula><p>, and the weight of the corresponding link is denoted by w_{k,l}. Index the nodes of the graph as neurons: the nodes that are input neurons have an index in the range</p><formula xml:id="formula_13">[0; N_l - 1]</formula><p>. By analogy, the indices of output nodes belong to the interval</p><formula xml:id="formula_14">[N_l; N_l + N_o - 1]</formula><p>, and the indices of hidden nodes are set in the interval</p><formula xml:id="formula_15">[N_l + N_o; N_v - 1]</formula><p>. Let us introduce an additional characteristic for each node of the graph, equal to the minimum length of a chain to any of the input nodes, and denote it ch_i. We call ch_i the layer to which the i-th node belongs. Thus, all input nodes belong to the 0-th layer, all non-input nodes that have input arcs from the input nodes belong to the 1-st layer, all other nodes with input arcs from nodes of the 1-st layer belong to the layer with index 2, etc. In this case, there may be situations when a node has no input arcs; we call it a hanging node, with layer number ch_i = -1.</p><p>For arcs, we also introduce an additional characteristic b_{k,l} for the arc c_{k,l}, which is necessary to determine whether the arc is a forward or a backward one. 
It will be calculated as follows:</p><formula xml:id="formula_16">b_{k,l} = 1, if ch_l - ch_k > 0; b_{k,l} = -1 otherwise ( 5 )</formula><p>That is, if the index of the layer of the arc's end node is greater than the index of the layer of its beginning node, then we consider the arc a forward one; otherwise we consider it a backward one.</p><p>Since each node of the graph represents a neuron, we denote by sum_i the value of the weighted sum of the inputs of the i-th node, and by o_i the value of its output (the value of the activation function of the i-th neuron-node). Then</p><formula xml:id="formula_17">o_i = f(sum_i)</formula><p>where f is the neuron activation function.</p><p>Let us divide the whole process of signal propagation from the input nodes into stages; during one stage the signals manage to pass only one arc. The number of the stage is denoted by s; for the very first stage, s = 1. For simplicity, it is assumed that all arcs have the same length and that signals travel along them instantly. We denote by a_i the flag that the output of node i was updated at the current stage: if a_i = 1, then the output of the node has been calculated at stage s; otherwise it has not. Let us introduce one more designation,</p><formula xml:id="formula_18">X = {x_i | i in [0; N_l - 1]}</formula><p>, the vector of input signals. Then the algorithm for calculating the ANN output is as follows:</p><formula xml:id="formula_19">1. o_i = x_i, a_i = 1, for all i in [0; N_l - 1]; 2. o_i = 0, for all i in [N_l; N_s - 1]; 3. s = 1; 4. sum_i = 0, a_i = 0, for all i in [N_l; N_s - 1].</formula><p>The procedure fn(i) for a node v_i is:</p><formula xml:id="formula_23">1. if ch_i = 0, then go to step 3; 2. for all input arcs c_{k,i} of node v_i: if a_k = 1, then sum_i = sum_i + w_{k,i} * o_k, else fn(k); 3. o_i = f(sum_i); 4. 
exit.</formula><p>The stopping criterion of the ANN output calculation algorithm can be one of the following:</p><p>─ stabilization of the values at the output of the ANN; ─ s exceeds a set value.</p><p>It is more reliable to calculate the output until the values at the output of the ANN stop changing, but when the network contains cycles and/or loops, its output may never become stable. Therefore, an additional stopping criterion is required, limiting the maximum number of stages of the network output calculation. For networks with no feedback (FB = 0), in many cases</p><formula xml:id="formula_24">max(ch_i) + 1</formula><p>stages are sufficient.</p></div>
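The staged output calculation can be sketched as follows, assuming nodes are numbered as in the text and arcs carry weights. Layers ch_i are computed by breadth-first search from the input nodes (hanging nodes keep ch = -1), and propagation runs stage by stage until the outputs stabilize or a stage limit is exceeded, matching the two stopping criteria above. This is an illustrative sketch, not the authors' implementation; the recursive fn procedure is replaced by an equivalent synchronous update.

```python
from collections import deque

def compute_layers(n_nodes, inputs, arcs):
    """ch_i: minimum chain length from any input node (BFS); hanging
    nodes keep ch = -1, matching the convention in the text."""
    ch = [-1] * n_nodes
    q = deque(inputs)
    for i in inputs:
        ch[i] = 0
    adj = {}
    for k, l, _w in arcs:
        adj.setdefault(k, []).append(l)
    while q:
        k = q.popleft()
        for l in adj.get(k, []):
            if ch[l] == -1:
                ch[l] = ch[k] + 1
                q.append(l)
    return ch

def propagate(n_nodes, inputs, arcs, x, act, max_stages=50, eps=1e-9):
    """Staged signal propagation: at each stage every signal crosses one
    arc; stop when outputs stabilize or max_stages is exceeded."""
    o = [0.0] * n_nodes
    for i, xi in zip(inputs, x):
        o[i] = xi
    for _s in range(max_stages):
        sums = [0.0] * n_nodes
        for k, l, w in arcs:
            sums[l] += w * o[k]
        new_o = [o[i] if i in inputs else act(sums[i]) for i in range(n_nodes)]
        if max(abs(a - b) for a, b in zip(new_o, o)) < eps:
            return new_o
        o = new_o
    return o
```

For a feedback-free network this stabilizes within max(ch_i) + 1 stages, as noted above.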
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Experiments</head><p>During testing, the main task is to assess the speed, quality and stability of the proposed method. Since the synthesized ANNs can later be used as diagnostic models for medical diagnosis, testing should be carried out on relevant test data. Testing will be carried out in two stages: the first stage will consist in the synthesis of an ANN using only the modified genetic algorithm, and the second in additional processing of the output layer by the support vector machine. This strategy will show more clearly how useful the support vector machine is. Data for testing were taken from an open repository, the UC Irvine Machine Learning Repository: the Breast Cancer Coimbra Data Set <ref type="bibr" target="#b30">[31]</ref>. Clinical features were observed or measured for 64 patients with breast cancer and 52 healthy controls. There are 10 predictors, all quantitative, and a binary dependent variable indicating the presence or absence of breast cancer. The predictors are anthropometric data and parameters which can be gathered in routine blood analysis. Prediction models based on these predictors, if accurate, can potentially be used as a biomarker of breast cancer. Table <ref type="table" target="#tab_2">1</ref> shows the main characteristics of the data sample. 
During the evaluation of the test results, we pay attention to the following criteria:</p><p>─ the time spent, s; ─ the average error of the final network (E); ─ the standard deviation (SD).</p><p>The relative error in this case is calculated as the ratio of the classification error to the total sample size (number of instances):</p><formula xml:id="formula_25">E = (error_class / Number_sampl) * 100%, (<label>6</label>)</formula><p>where E is the relative error. The standard deviation gives an idea of how accurately the ANN predicts the result, since it is calculated from the difference between the output of the ANN and the known result. It is also important to know that this indicator can be calculated only with a sufficient number of observations. Otherwise, the calculation of the SD will be uninformative and its use will not lead to improvement of the results of the ANN. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head> </head><p>where SD is the standard deviation, x_i is the i-th element of the set, Number_sampl is the number of instances in the sample, and x̄ is the mean value of these observations <ref type="bibr" target="#b31">[32]</ref>.</p></div>
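The two evaluation criteria can be written out directly. Formula (6) is transcribed as given, and the standard deviation is computed with the usual population formula over the deviations between the ANN output and the known result; the function names are ours.

```python
import math

def relative_error(n_misclassified: int, n_samples: int) -> float:
    """Formula (6): classification errors as a percentage of sample size."""
    return n_misclassified / n_samples * 100.0

def standard_deviation(values) -> float:
    """Population standard deviation of the per-instance deviations
    between the ANN output and the known result."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
```

For the 116-instance Coimbra sample, 29 misclassified cases would give E = 25%.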
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">The results analysis</head><p>Table <ref type="table" target="#tab_3">2</ref> shows the results of testing the modified genetic algorithm in comparison with the NEAT and ESP methods. As the table shows, the modified GA is ahead of ESP in execution time, although inferior to NEAT. However, it should be noted that in terms of error values the proposed method is significantly ahead of the existing methods.</p><p>Let us repeat the testing, but now with the additional use of the support vector machine. The results are shown in Table <ref type="table" target="#tab_4">3</ref>. As can be seen from the table, the modified GA using the support vector machine is inferior to its opponents in terms of execution time. However, in terms of the minimum, maximum and average errors of the output ANN it is significantly better than its competitors. Therefore, we can conclude that the use of the support vector machine really does significantly improve the results of the synthesis. As can be seen from the diagram, the modified genetic method required more iterations than the existing methods, but the time spent per iteration was smaller. That is, it can be concluded that the iterations are not complex, and to reduce their number it is possible to resort to parallelization, which will significantly speed up the work even when using the support vector machine <ref type="bibr" target="#b32">[33]</ref><ref type="bibr" target="#b33">[34]</ref><ref type="bibr" target="#b34">[35]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Conclusion</head><p>The problem of finding the optimal method of synthesis of ANNs requires a comprehensive approach. Existing methods of ANN training are well tested, but they have a number of nuances and disadvantages. The paper proposes a modified genetic algorithm and a mechanism for its application in the synthesis of ANNs.</p><p>Based on the analysis of the experimental results, it can be argued that the proposed method performs well. However, to reduce the number of iterations and improve accuracy, work should continue towards parallelization of the calculations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. The general scheme of the method</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>.</head><label></label><figDesc>The adaptive mutation mechanism is one of the key features of the proposed method. The choice of mutation type is determined based on the value of f_c,</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>N_h is the number of hidden neurons in the mutating network. Conventionally, the entire algorithm can be divided into two branches at the first conditional transition: ─ the branch that increases f_c, carried out if the transition condition is fulfilled; ─ the branch that reduces f_c, performed if the transition condition is not met.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. The diagram of the selection of the type of mutation</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>The arc that goes from node k to node 1 is denoted by an ordered pair.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. The general scheme of the calculation of the output layer of ANN</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>instances in the sample.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. The distribution of iterations in experiments</figDesc><graphic coords="13,175.32,371.16,244.44,132.48" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>That is, each individual Ind_i is a separate ANN NN_i [19-21]. During initialization the population is divided into two halves: the genes (g_1, g_2, ..., g_n) of the first half of the individuals are assigned randomly, Ind_i = (Rand_g1, Rand_g2, ..., Rand_gn); the genes of the second half of the population are defined as the inversion of the genes of the first half.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>, then go to step number 7; 6. calculation of the feedback network. For all input feedbacks, if the stop criterion is not met, then s=s+1 and go to step number 4.</figDesc><table><row><cell>5. if k 7. if</cell><cell>1 s k ;   s N i ; l N 1  j c , node k v , where   j k k s l o sum sum N N     : 1 ; , if s ch j  ; 0  i a , than ) (i fn for all   1 ;   s l N N i ;</cell></row><row><cell>8.</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 .</head><label>1</label><figDesc>Main characteristics of the Breast Cancer Coimbra Data Set</figDesc><table><row><cell>Criterion</cell><cell>Characteristic</cell><cell>Criterion</cell><cell>Characteristic</cell></row><row><cell>Data Set Characteristics</cell><cell>Multivariate</cell><cell>Number of Instances</cell><cell>116</cell></row><row><cell>Attribute Characteristics</cell><cell>Integer</cell><cell>Number of Attributes</cell><cell>10</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 .</head><label>2</label><figDesc>Results of testing</figDesc><table><row><cell></cell><cell>Time, s</cell><cell>E</cell><cell>SD</cell></row><row><cell>NEAT</cell><cell>468.013</cell><cell>4.40%</cell><cell>3.70</cell></row><row><cell>ESP</cell><cell>9389.55</cell><cell>3.07%</cell><cell>2.99</cell></row><row><cell>Modified GA</cell><cell>631.373</cell><cell>2.96%</cell><cell>3.05</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 3 .</head><label>3</label><figDesc>Results of testing with the support vector machine</figDesc><table><row><cell></cell><cell>Time, s</cell><cell>E</cell><cell>SD</cell></row><row><cell>NEAT</cell><cell>468.013</cell><cell>4.40%</cell><cell>3.70</cell></row><row><cell>ESP</cell><cell>9389.55</cell><cell>3.07%</cell><cell>2.99</cell></row><row><cell>Modified GA (with support vector machines)</cell><cell>5326.326</cell><cell>2.36%</cell><cell>2.17</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>The work was performed as part of the project "Methods and means of decisionmaking for data processing in intellectual recognition systems" (number of state registration 0117U003920) of Zaporizhzhia National Technical University.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">Improving Diagnosis in Health Care</title>
				<editor>
			<persName><forename type="first">E</forename><surname>Balogh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Miller</surname></persName>
		</editor>
		<editor>
			<persName><surname>Ball</surname></persName>
		</editor>
		<meeting><address><addrLine>Washington, USA</addrLine></address></meeting>
		<imprint>
			<publisher>The National Academies Press</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
		<respStmt>
			<orgName>Committee on Diagnostic Error in Health Care, Board on Health Care Services, Institute of Medicine ; The National Academies of Sciences, Engineering</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Artificial neural networks in medical diagnostics [Iskusstvennyie neyronnyie seti v meditsinskoy diagnostike]</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Lugovskaya</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer systems and networks: proceedings of the 54th scientific conference of postgraduates, undergraduates and students 2018</title>
				<meeting><address><addrLine>BGUIR, Minsk</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="182" to="183" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The doctor-patient relationship: challenges, opportunities, and strategies</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Goold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lipkin</surname><genName>Jr</genName></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of General Internal Medicine</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="23" to="33" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Improved method of group decision making in expert systems based on competitive agents selection</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kolpakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lovkin</surname></persName>
		</author>
		<idno type="DOI">10.1109/UKRCON.2017.8100388</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON)</title>
				<meeting><address><addrLine>Kyiv</addrLine></address></meeting>
		<imprint>
			<publisher>Institute of Electrical and Electronics Engineers</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="939" to="943" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Development of the method for decomposition of superpositions of unknown pulsed signals using the secondorder adaptive spectral analysis</title>
		<author>
			<persName><forename type="first">O</forename><surname>Stepanenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Deineha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<idno type="DOI">10.15587/1729-4061.2018.126578</idno>
	</analytic>
	<monogr>
		<title level="j">Eastern-European Journal of Enterprise Technologies</title>
		<imprint>
			<biblScope unit="volume">92</biblScope>
			<biblScope unit="issue">2/9</biblScope>
			<biblScope unit="page" from="48" to="54" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">The Essence of Neural Networks (The Essence of Computing Series</title>
		<author>
			<persName><forename type="first">R</forename><surname>Callan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Prentice Hall</publisher>
			<pubPlace>, Europe</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Introduction to artificial intelligence [Vvedenie v iskusstvennyiy intellekt]</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">N</forename><surname>Yasnitskiy</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>Academy</publisher>
			<pubPlace>Moscow</pubPlace>
		</imprint>
	</monogr>
	<note>3rd edn</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Synthesis of optimal artificial neural networks using a modified genetic algorithm [Sintez optimalnyih iskusstvennyih neyronnyih setey s pomoschyu modifitsirovannogo geneticheskogo algoritma]</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">B</forename><surname>Bondarenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">A</forename><surname>Gatchin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">N</forename><surname>Geranichev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nauchno-tehnicheskiy vestnik informatsionnyih tehnologiy, mehaniki i optiki</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">78</biblScope>
			<biblScope unit="page" from="51" to="55" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Synthesis of the optimal structure of neural network devices [Sintez optimalnoy strukturyi neyrosetevyih ustroystv]</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Lukichev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Usoltsev</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="97" to="102" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Approximation contexts in addressing graph data structures</title>
		<author>
			<persName><forename type="first">N</forename><surname>Van Tuc</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="30" to="55" />
		</imprint>
		<respStmt>
			<orgName>University of Wollongong Thesis Collection</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Parallel Method of Production Rules Extraction Based on Computational Intelligence</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skrupsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Korobiichuk</surname></persName>
		</author>
		<idno type="DOI">10.3103/S0146411617040058</idno>
	</analytic>
	<monogr>
		<title level="j">Automatic Control and Computer Sciences</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="215" to="223" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Development of the indicator set of the features informativeness estimation for recognition and diagnostic model synthesis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lovkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSET.2018.8336342</idno>
	</analytic>
	<monogr>
		<title level="m">14th International Conference on Advanced Trends in Radioelectronics</title>
				<meeting><address><addrLine>TCSET</addrLine></address></meeting>
		<imprint>
			<publisher>Telecommunications and Computer Engineering</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="903" to="908" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Feature Selection Based on Parallel Stochastic Computing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lovkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<idno type="DOI">10.1109/STC-CSIT.2018.8526729</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</title>
				<meeting><address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="347" to="351" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Evolutionary method for solving the traveling salesman problem</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Fedorchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Stepanenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Goncharenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/INFOCOMMST.2018.8632033</idno>
	</analytic>
	<monogr>
		<title level="m">Problems of Infocommunications. Science and Technology: 5th International Scientific-Practical Conference PICST2018</title>
				<meeting><address><addrLine>Kharkiv; Kharkiv, Kharkiv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-10-12">9-12 October 2018</date>
			<biblScope unit="page" from="331" to="339" />
		</imprint>
		<respStmt>
			<orgName>National University of Radioelectronics</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Evolutionary Algorithms Design: State of the Art and Future Perspectives</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">R</forename><surname>Tsoy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE East-West Design and Test Workshop (EWDTW&apos;06)</title>
				<meeting>IEEE East-West Design and Test Workshop (EWDTW&apos;06)<address><addrLine>Sochi, Russia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="375" to="379" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Genetic synthesis of Boolean neural networks with a cell rewriting developmental process</title>
		<author>
			<persName><forename type="first">F</forename><surname>Gruau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Workshop on Combination of Genetic Algorithms and Neural Networks (COGANN-92)</title>
				<meeting>the International Workshop on Combination of Genetic Algorithms and Neural Networks (COGANN-92)<address><addrLine>Los Alamos, CA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society Press</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="55" to="74" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Hierarchical evolution of neural networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Moriarty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>David</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Miikkulainen</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICEC.1998.699793</idno>
	</analytic>
	<monogr>
		<title level="m">Evolutionary Computation Proceedings</title>
				<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="428" to="433" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Numerical optimization with neuroevolution</title>
		<author>
			<persName><forename type="first">B</forename><surname>Greer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hakonen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lahdelma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Miikkulainen</surname></persName>
		</author>
		<idno type="DOI">10.1109/CEC.2002.1006267</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2002 Congress on Evolutionary Computation (CEC &apos;02)</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="396" to="401" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="http://blog.otoro.net/2015/03/10/esp-algorithm-for-double-pendulum" />
		<title level="m">Enforced Subpopulations (ESP) neuroevolution algorithm for balancing inverted double pendulum</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Evolving Neural Networks through Augmenting Topologies</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">O</forename><surname>Stanley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Miikkulainen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="99" to="127" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Stochastic optimization for collision selection in high energy physics</title>
		<author>
			<persName><forename type="first">S</forename><surname>Whiteson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Whiteson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th national conference on Innovative applications of artificial intelligence, IAAI&apos;07</title>
				<meeting>the 19th national conference on Innovative applications of artificial intelligence, IAAI&apos;07</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1819" to="1825" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Long Short-Term Memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Development of a genetic algorithm for setting up an artificial neural network [Razrabotka geneticheskogo algoritma nastroyki iskusstvennoy neyronnoy seti]</title>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">R</forename><surname>Tsoy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Tomskiy politehnicheskiy universitet</title>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Mathematical modelling of land use and landscape complexity with ultrametric topology</title>
		<author>
			<persName><forename type="first">F</forename><surname>Papadimitriou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Land Use Science</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1" to="21" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Artificial Intelligence in modelling the complexity of Mediterranean landscape transformations</title>
		<author>
			<persName><forename type="first">F</forename><surname>Papadimitriou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers and Electronics in Agriculture</title>
		<imprint>
			<biblScope unit="page" from="87" to="96" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Synthesis of artificial neural networks using a modified genetic algorithm</title>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Gorobii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<ptr target=":conf/iddm/PerovaBSKR18" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st International Workshop on Informatics &amp; Data-Driven Medicine</title>
				<meeting>the 1st International Workshop on Informatics &amp; Data-Driven Medicine<address><addrLine>IDDM</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
	<note>dblp key</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Analysis of inrush currents of the unloaded transformer using the circuitfield modelling methods</title>
		<author>
			<persName><forename type="first">D</forename><surname>Yarymbash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yarymbash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kotsur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Divchuk</surname></persName>
		</author>
		<idno type="DOI">10.15587/1729-4061.2018.134248</idno>
	</analytic>
	<monogr>
		<title level="j">Eastern-European Journal of Enterprise Technologies</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="6" to="11" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Enhancing the effectiveness of calculation of parameters for short circuit of threephase transformers using field simulation methods</title>
		<author>
			<persName><forename type="first">D</forename><surname>Yarymbash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yarymbash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kotsur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Divchuk</surname></persName>
		</author>
		<idno type="DOI">10.15587/1729-4061.2018.140236</idno>
	</analytic>
	<monogr>
		<title level="j">Eastern-European Journal of Enterprise Technologies</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="22" to="28" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Development of stratified approach to software defined networks simulation</title>
		<author>
			<persName><forename type="first">V</forename><surname>Shkarupylo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skrupsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kolpakova</surname></persName>
		</author>
		<idno type="DOI">10.15587/1729-4061.2017.110142</idno>
	</analytic>
	<monogr>
		<title level="j">Eastern-European Journal of Enterprise Technologies</title>
		<imprint>
			<biblScope unit="volume">89</biblScope>
			<biblScope unit="issue">5/9</biblScope>
			<biblScope unit="page" from="67" to="73" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Training Recurrent Networks by Evolino</title>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wierstra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gagliolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gomez</surname></persName>
		</author>
		<idno type="DOI">10.1162/neco.2007.19.3.757</idno>
	</analytic>
	<monogr>
		<title level="j">Neural computation</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="757" to="779" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<ptr target="https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Coimbra" />
		<title level="m">Breast Cancer Coimbra Data Set</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Statistics notes: measurement error</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Bland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Altman</surname></persName>
		</author>
		<idno type="DOI">10.1136/bmj.312.7047.1654</idno>
	</analytic>
	<monogr>
		<title level="j">BMJ</title>
		<imprint>
			<biblScope unit="volume">312</biblScope>
			<biblScope unit="issue">7047</biblScope>
			<biblScope unit="page">1654</biblScope>
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Parallel data reduction method for complex technical objects and processes</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lovkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<idno type="DOI">10.1109/DESSERT.2018.8409184</idno>
	</analytic>
	<monogr>
		<title level="m">9th International Conference on Dependable Systems, Services and Technologies (DESSERT</title>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="526" to="532" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Methods of semantic proximity extraction between the lexical units in infocommunication systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
		<idno type="DOI">10.1109/INFOCOMMST.2017.8246137</idno>
	</analytic>
	<monogr>
		<title level="m">4th International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
		<imprint>
			<biblScope unit="page" from="7" to="13" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Stratified Model of the Internet of Things Infrastructure</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Alsayaydeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Shkarupylo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Hamid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skrupsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<idno type="DOI">10.3923/jeasci.2018.8634.8638</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Engineering and Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">20</biblScope>
			<biblScope unit="page" from="8634" to="8638" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
