<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Dynamic Stock Buffer Management Method Based on Linguistic Constructions</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Eugene</forename><surname>Fedorov</surname></persName>
							<email>fedorovee@ukr.net</email>
							<affiliation key="aff0">
								<orgName type="institution">Cherkasy State Technological University</orgName>
								<address>
<addrLine>Shevchenko blvd, 460</addrLine>
									<postCode>18006</postCode>
									<settlement>Cherkasy</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olga</forename><surname>Nechyporenko</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Cherkasy State Technological University</orgName>
								<address>
<addrLine>Shevchenko blvd, 460</addrLine>
									<postCode>18006</postCode>
									<settlement>Cherkasy</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Dynamic Stock Buffer Management Method Based on Linguistic Constructions</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7CC778C91D876E564E519A2B9ECD8CB3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T11:51+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>dynamic stock buffer management</term>
					<term>theory of constraints</term>
					<term>artificial neural network</term>
					<term>fuzzy inference systems</term>
					<term>genetic algorithm</term>
					<term>linguistic constructions</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper proposes a dynamic stock buffer management method based on linguistic constructions. The novelty of the research lies in the fact that a control system based on fuzzy logic and linguistic constructions was created for dynamic stock buffer management, and two artificial neuro-fuzzy network models were created. Three criteria for evaluating the effectiveness were selected, and the parameters of the proposed models were identified based on the backpropagation algorithm in batch mode and on the genetic algorithm, both oriented toward parallel information processing technology. The proposed models and the procedures for their parametric identification make it possible to increase the speed, accuracy and reliability of decision making. The proposed dynamic stock buffer management method based on linguistic constructions can be used in various intelligent systems that exercise control in natural language.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Currently, one of the most pressing problems in the field of processing natural language structures is the insufficient speed, adequacy and probability of correct recognition <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. As a result, the management of dynamic objects in natural language can be ineffective. Therefore, the development of methods that increase the efficiency of using linguistic constructions for managing objects is an urgent task.</p><p>In this work, as the field of application of natural language constructions, we have chosen dynamic stock buffer management, which is used for supply chain management and is based on the theory of constraints <ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref>.</p><p>To date, there are no computer systems for dynamic stock buffer management based on soft computing and linguistic constructions.</p><p>At present, artificial intelligence methods are used to control dynamic objects, with artificial neural networks being the most popular <ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>.</p><p>The advantages of neural networks are <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>:</p><p>• the possibility of training and adaptation;</p><p>• the ability to identify patterns in data and to generalize them, i.e. to extract knowledge from data, so that no prior knowledge of the object (for example, its mathematical model) is required;</p><p>• parallel processing of information, which increases computing power.</p><p>The disadvantages of neural networks are <ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>:</p><p>• the difficulty of determining the structure of the network, since there are no algorithms for calculating the number of layers and the number of neurons in each layer for specific applications;</p><p>• the difficulty of forming a representative sample;</p><p>• the high probability that the training and adaptation method hits a local extremum;</p><p>• the inaccessibility to human understanding of the knowledge accumulated by the network, since this knowledge is distributed among all elements of the neural network and is represented by its weight coefficients.</p><p>Recently, neural networks have been combined with fuzzy inference systems. The advantages of fuzzy inference systems are <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>:</p><p>• the representation of knowledge in the form of rules that are easily understandable by a person;</p><p>• no need for an accurate estimate of the object's variables (incomplete and inaccurate data are admissible).</p><p>The disadvantages of fuzzy inference systems are <ref type="bibr" target="#b16">[17]</ref><ref type="bibr" target="#b17">[18]</ref><ref type="bibr" target="#b18">[19]</ref>:</p><p>• the impossibility of training and adaptation (the parameters of the membership functions cannot be adjusted automatically);</p><p>• the impossibility of parallel processing of information, which would increase computing power.</p><p>Since metaheuristics <ref type="bibr" target="#b19">[20]</ref><ref type="bibr" target="#b20">[21]</ref><ref type="bibr" target="#b21">[22]</ref> and, in particular, genetic algorithms can be used to train the parameters of membership functions instead of neural network learning algorithms, let us note their advantages and disadvantages.</p><p>The advantage of genetic algorithms for training neural networks is <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24]</ref> a lower probability of hitting a local extremum.</p><p>The disadvantages of genetic algorithms for training neural networks are <ref type="bibr" target="#b24">[25]</ref><ref type="bibr" target="#b25">[26]</ref><ref type="bibr" target="#b26">[27]</ref>:</p><p>• the solution search is slower than with neural network training methods;</p><p>• in the case of binary genes, an increase in the search space reduces the accuracy of the solution for a constant chromosome length;</p><p>• in the case of binary genes, the encoding/decoding operations reduce the algorithm speed.</p><p>Therefore, creating a dynamic stock buffer management method that eliminates the indicated disadvantages is an urgent task.</p><p>The aim of the work is to increase the efficiency of dynamic stock buffer management with an artificial neuro-fuzzy network that is trained by a genetic algorithm.</p><p>To achieve this goal, the following tasks must be solved:</p><p>1. Creation of a fuzzy dynamic stock buffer management system.</p><p>2. Creation of mathematical models of an artificial neuro-fuzzy network for dynamic stock buffer management.</p><p>3. The choice of criteria for evaluating the effectiveness of the mathematical models of an artificial neuro-fuzzy network for dynamic stock buffer management.</p><p>4. Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management based on the backpropagation algorithm in batch mode.</p><p>5. Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management based on a genetic algorithm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Creation of a fuzzy dynamic stock buffer management system</head><p>For the dynamic stock buffer management, a fuzzy inference system has been further improved in the work. It represents knowledge about stock buffer management in the form of rules with linguistic constructions that are easily accessible to human understanding, and involves the following steps:</p><p>• formation of linguistic variables;</p><p>• formation of a base of fuzzy rules;</p><p>• fuzzification;</p><p>• aggregation of subconditions;</p><p>• activation of conclusions;</p><p>• aggregation of conclusions;</p><p>• defuzzification.</p><p>As clear input variables the following were chosen:</p><p>• the current stock size x1 in pieces;</p><p>• the time spent in the red zone of the stock buffer x2 in units of time;</p><p>• the time spent in the green zone of the stock buffer x3 in units of time.</p><p>As linguistic input variables the following were chosen:</p><p>• the depth of the stay in the red zone of the stock buffer x̃1, depending on the current stock size, with the values small and big;</p><p>• the duration of the stay in the red zone of the stock buffer x̃2, with the values short and long;</p><p>• the duration of the stay in the green zone of the stock buffer x̃3, with the values short and long.</p><p>Their sets of values are the fuzzy sets</p><formula xml:id="formula_0">\tilde{A}_{i1} = \{x_i \mid \mu_{A_{i1}}(x_i)\}, \quad \tilde{A}_{i2} = \{x_i \mid 1 - \mu_{A_{i1}}(x_i)\}, \quad i = \overline{1,3}.</formula><p>As the clear output variable, the number y of the action type that changes the size of the stock buffer was chosen. 
As the linguistic output variable, the action ỹ that changes the size of the stock buffer was chosen, with the values increase, decrease and unchange; the corresponding fuzzy sets are</p><formula xml:id="formula_1">\tilde{B}_k = \{y \mid \mu_{B_k}(y)\}, \quad k = \overline{1,3}.</formula><p>The proposed fuzzy rules take into account all possible states of the stock buffer (all possible combinations of the values of the input linguistic variables) and the corresponding actions. For example, the first rule corresponds to the following knowledge: if the depth of the stay of the stock buffer in the red zone is big and the stay of the stock buffer in the red zone is long and the stay of the stock buffer in the green zone is short, then increase the size of the stock buffer.</p><p>Let us determine the degree of truth of each subcondition of each rule using the membership function</p><formula xml:id="formula_2">\mu_{A_{ij}}(x_i).</formula><p>The Gaussian function was chosen as the membership function of the subconditions, i.e.</p><formula xml:id="formula_3">\mu_{A_{i1}}(x_i) = \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{i1}}{\sigma_{i1}}\right)^2\right), \quad \mu_{A_{i2}}(x_i) = 1 - \mu_{A_{i1}}(x_i), \quad i = \overline{1,3},</formula><p>where m_{ij} is the expected value and σ_{ij} is the standard deviation.</p><p>The membership functions of the conditions take into account all possible states of the stock buffer (all possible combinations of the values of the linguistic variables) and are defined as</p><formula xml:id="formula_4">\mu_{A_1}(x) = \mu_{A_{11}}(x_1)\mu_{A_{21}}(x_2)\mu_{A_{31}}(x_3), \; \mu_{A_2}(x) = \mu_{A_{11}}(x_1)\mu_{A_{21}}(x_2)\mu_{A_{32}}(x_3), \; \mu_{A_3}(x) = \mu_{A_{11}}(x_1)\mu_{A_{22}}(x_2)\mu_{A_{31}}(x_3), \; \mu_{A_4}(x) = \mu_{A_{11}}(x_1)\mu_{A_{22}}(x_2)\mu_{A_{32}}(x_3), \; \mu_{A_5}(x) = \mu_{A_{12}}(x_1)\mu_{A_{21}}(x_2)\mu_{A_{31}}(x_3), \; \mu_{A_6}(x) = \mu_{A_{12}}(x_1)\mu_{A_{21}}(x_2)\mu_{A_{32}}(x_3), \; \mu_{A_7}(x) = \mu_{A_{12}}(x_1)\mu_{A_{22}}(x_2)\mu_{A_{31}}(x_3), \; \mu_{A_8}(x) = \mu_{A_{12}}(x_1)\mu_{A_{22}}(x_2)\mu_{A_{32}}(x_3),</formula><p>or as the same eight expressions with the product replaced by the minimum, e.g.</p><formula>\mu_{A_1}(x) = \min\{\mu_{A_{11}}(x_1), \mu_{A_{21}}(x_2), \mu_{A_{31}}(x_3)\}, \; \ldots, \; \mu_{A_8}(x) = \min\{\mu_{A_{12}}(x_1), \mu_{A_{22}}(x_2), \mu_{A_{32}}(x_3)\}.</formula><p>The membership functions of the conclusions connect all possible states of the stock buffer (all possible combinations of the values of the linguistic variables) with the corresponding actions and are defined as</p><formula xml:id="formula_5">\mu_{C_1}(x, z) = \mu_{A_1}(x)\mu_{B_3}(z)F_1, \; \mu_{C_2}(x, z) = \mu_{A_2}(x)\mu_{B_1}(z)F_2, \; \mu_{C_3}(x, z) = \mu_{A_3}(x)\mu_{B_2}(z)F_3, \; \mu_{C_4}(x, z) = \mu_{A_4}(x)\mu_{B_1}(z)F_4, \; \mu_{C_5}(x, z) = \mu_{A_5}(x)\mu_{B_2}(z)F_5, \; \mu_{C_6}(x, z) = \mu_{A_6}(x)\mu_{B_1}(z)F_6, \; \mu_{C_7}(x, z) = \mu_{A_7}(x)\mu_{B_2}(z)F_7, \; \mu_{C_8}(x, z) = \mu_{A_8}(x)\mu_{B_3}(z)F_8,</formula><p>or as</p><formula>\mu_{C_r}(x, z) = \min\{\mu_{A_r}(x), \mu_{B_{k(r)}}(z)\}F_r, \quad r = \overline{1,8},</formula><p>where k(r) is the number of the conclusion of rule r (k = 3, 1, 2, 1, 2, 1, 2, 3 for r = 1, …, 8).</p><p>In this work, the membership functions \mu_{B_k}(z) and the weighting coefficients of the fuzzy rules F_r are defined as</p><formula xml:id="formula_7">\mu_{B_k}(z) = [z = k] = \begin{cases}1, &amp; z = k,\\ 0, &amp; z \ne k,\end{cases} \quad k = \overline{1,3}, \qquad F_1 = F_2 = \ldots = F_8 = 1.</formula><p>The membership function of the final conclusion is defined as 
</p><formula xml:id="formula_9">\mu_C(x, z) = 1 - \prod_{r=1}^{8}\left(1 - \mu_{C_r}(x, z)\right), \quad z = \overline{1,3},</formula><p>or as</p><formula xml:id="formula_10">\mu_C(x, z) = \max\{\mu_{C_1}(x, z), \ldots, \mu_{C_8}(x, z)\}, \quad z = \overline{1,3}.</formula><p>To obtain the number of the type of action that changes the size of the stock buffer, the maximum membership function method is used</p><formula xml:id="formula_11">z^{*} = \arg\max_{z} \mu_C(x, z), \quad z = \overline{1,3}.</formula></div>
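The fuzzy inference pipeline above (Gaussian fuzzification, min-aggregation of subconditions, max-aggregation of conclusions, maximum-membership defuzzification) can be sketched in a few lines. This is our own toy illustration, not the authors' implementation: the parameter values M and S and the test input are assumed for demonstration; the consequent pattern mirrors the rule-to-conclusion mapping given above.

```python
import numpy as np

# Toy sketch of the fuzzy inference above (not the authors' code).
# M, S are assumed Gaussian parameters m_i1, sigma_i1 for the three inputs.
M = np.array([10.0, 2.0, 5.0])
S = np.array([4.0, 1.0, 2.0])

# Each rule picks membership j (0 or 1) per input; CONSEQ maps rule -> action,
# following the conclusion pattern (3,1,2,1,2,1,2,3) of the rules, 0-indexed.
COMBOS = [(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1)]
CONSEQ = [2, 0, 1, 0, 1, 0, 1, 2]

def infer(x):
    x = np.asarray(x, dtype=float)
    mu1 = np.exp(-0.5 * ((x - M) / S) ** 2)   # fuzzification: mu_{A_i1}
    mu = np.stack([mu1, 1.0 - mu1])           # mu[j, i]; mu_{A_i2} = 1 - mu_{A_i1}
    out = np.zeros(3)
    for (j1, j2, j3), z in zip(COMBOS, CONSEQ):
        fire = min(mu[j1, 0], mu[j2, 1], mu[j3, 2])  # aggregation of subconditions
        out[z] = max(out[z], fire)                   # aggregation of conclusions
    return int(np.argmax(out))                # maximum-membership defuzzification
```

For example, `infer([12.0, 3.0, 1.0])` returns the index of the action whose rules fire most strongly under the assumed parameters.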
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Creation of mathematical models of an artificial neuro-fuzzy network for dynamic stock buffer management</head><p>For the dynamic stock buffer management, the mathematical models of the artificial neural network have been further improved in the work through the use of pi-sigma, inverted pi and min-max neurons, which makes it possible to simulate the stages of fuzzy inference that determine the structure of the models.</p><p>The structure of the model of an artificial neuro-fuzzy network in the form of a graph is shown in Figure <ref type="figure" target="#fig_1">1</ref>. The input (zero) layer contains three neurons (corresponding to the number of input variables). The first hidden layer implements fuzzification and contains six neurons (corresponding to the number of values of the linguistic input variables). The second hidden layer implements the aggregation of subconditions and contains eight neurons (corresponding to the number of fuzzy rules). The third hidden layer implements the activation of conclusions and contains eight neurons (corresponding to the number of fuzzy rules). 
The output (fourth) layer implements the aggregation of conclusions and contains three neurons (corresponding to the number of values of the linguistic output variable).</p><p>The functioning of the artificial neuro-fuzzy network is as follows.</p><p>In the first layer, the membership functions of the subconditions are calculated</p><formula xml:id="formula_12">\mu_{A_{i1}}(x_i) = \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{i1}}{\sigma_{i1}}\right)^2\right), \quad \mu_{A_{i2}}(x_i) = 1 - \mu_{A_{i1}}(x_i), \quad i = \overline{1,3}.</formula><p>In the second layer, the membership functions of the conditions are calculated based on:</p><p>• the pi-sigma neuron</p><formula xml:id="formula_13">\mu_{A_r}(x) = \prod_{i=1}^{3}\sum_{j=1}^{2} w_{ij}^{1r}\mu_{A_{ij}}(x_i), \quad r = \overline{1,8};</formula><p>• the min-max neuron</p><formula>\mu_{A_r}(x) = \min_{i=\overline{1,3}}\max_{j=\overline{1,2}}\{w_{ij}^{1r}\mu_{A_{ij}}(x_i)\}, \quad r = \overline{1,8},</formula><p>where w_{ij}^{1r} = 1 if the membership function \mu_{A_{ij}} enters the condition of rule r and w_{ij}^{1r} = 0 otherwise (for the first rule, w_{11}^{11} = 1, w_{12}^{11} = 0, w_{21}^{11} = 1, w_{22}^{11} = 0, w_{31}^{11} = 1, w_{32}^{11} = 0).</p><p>In the third layer, the membership functions of the conclusions are calculated based on:</p><p>• the pi neuron</p><formula xml:id="formula_14">\mu_{C_r}(x, z) = w_r\mu_{A_r}(x)\mu_{B_r}(z), \quad z = \overline{1,3}, \quad r = \overline{1,8};</formula><p>• the min neuron</p><formula>\mu_{C_r}(x, z) = \min\{w_r, \mu_{A_r}(x), \mu_{B_r}(z)\}, \quad z = \overline{1,3}, \quad r = \overline{1,8},</formula><p>where w_r = F_r. 
In the fourth layer, the membership functions of the final conclusion are calculated based on:</p><p>• the inverted pi neuron</p><formula xml:id="formula_15">y_C(x, z) = 1 - \prod_{r=1}^{8}\left(1 - w_r^{z}\mu_{C_r}(x, z)\right), \quad z = \overline{1,3};</formula><p>• the max neuron</p><formula>y_C(x, z) = \max_{r=\overline{1,8}}\{w_r^{z}\mu_{C_r}(x, z)\}, \quad z = \overline{1,3},</formula><p>where w_r^{z} = 1 if rule r concludes the action z and w_r^{z} = 0 otherwise: w^{1} = (0, 1, 0, 1, 0, 1, 0, 0), w^{2} = (0, 0, 1, 0, 1, 0, 1, 0), w^{3} = (1, 0, 0, 0, 0, 0, 0, 1).</p><p>Thus, the mathematical model of an artificial neuro-fuzzy network based on pi-sigma and inverted pi neurons is presented as</p><formula xml:id="formula_16">y_C(x, z) = 1 - \prod_{r=1}^{8}\left(1 - w_r^{z} w_r \mu_{B_r}(z)\prod_{i=1}^{3}\sum_{j=1}^{2} w_{ij}^{1r}\mu_{A_{ij}}(x_i)\right), \quad z = \overline{1,3}.<label>(1)</label></formula><p>The mathematical model of an artificial neuro-fuzzy network based on min-max neurons is presented as</p><formula xml:id="formula_17">y_C(x, z) = \max_{r=\overline{1,8}}\min\left\{w_r^{z}, w_r, \min_{i=\overline{1,3}}\max_{j=\overline{1,2}}\{w_{ij}^{1r}\mu_{A_{ij}}(x_i)\}, \mu_{B_r}(z)\right\}, \quad z = \overline{1,3}.<label>(2)</label></formula><p>To make a decision on choosing an action that changes the size of the stock buffer, for models (1) -( <ref type="formula" target="#formula_13">2</ref>), the method of the maximum membership function is used</p><formula xml:id="formula_19">z^{*} = \arg\max_{z} y_C(x, z), \quad z = \overline{1,3}.</formula></div>
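Models (1) and (2) differ only in the neuron types of each layer. The following sketch is our own minimal version, not the authors' implementation; the parameters M and S and the test input are assumed, while the 0/1 wiring matrices follow the fixed rule-selection and rule-to-output patterns described above.

```python
import numpy as np

# Our minimal sketch of the two forward passes (assumed parameters).
COMBOS = [(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1)]
WZ = np.array([[0,1,0,1,0,1,0,0],     # w^1_r : rules concluding action 1
               [0,0,1,0,1,0,1,0],     # w^2_r : rules concluding action 2
               [1,0,0,0,0,0,0,1]])    # w^3_r : rules concluding action 3

def memberships(x, M, S):
    mu1 = np.exp(-0.5 * ((np.asarray(x, dtype=float) - M) / S) ** 2)
    return np.stack([mu1, 1.0 - mu1])                 # mu[j, i]

def model1(x, M, S):
    """Pi-sigma conditions, inverted-pi output layer: sketch of model (1)."""
    mu = memberships(x, M, S)
    fire = np.array([mu[j1,0] * mu[j2,1] * mu[j3,2] for j1, j2, j3 in COMBOS])
    return 1.0 - np.prod(1.0 - WZ * fire, axis=1)

def model2(x, M, S):
    """Min-max conditions, max output layer: sketch of model (2)."""
    mu = memberships(x, M, S)
    fire = np.array([min(mu[j1,0], mu[j2,1], mu[j3,2]) for j1, j2, j3 in COMBOS])
    return np.max(np.minimum(WZ, fire), axis=1)

M = np.array([10.0, 2.0, 5.0])   # assumed m_i1
S = np.array([4.0, 1.0, 2.0])    # assumed sigma_i1
x = [12.0, 3.0, 1.0]             # assumed test input
decision1 = int(np.argmax(model1(x, M, S)))   # maximum-membership decision
decision2 = int(np.argmax(model2(x, M, S)))
```

On the same input the two variants generally agree on the decision while producing different membership values, which is the point of offering both neuron types.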
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The choice of criteria for evaluating the effectiveness of mathematical models of an artificial neuro-fuzzy network for dynamic stock buffer management</head><p>In this work, to assess the parametric identification of the mathematical models of an artificial neuro-fuzzy network ( <ref type="formula" target="#formula_8">1</ref> ) -( <ref type="formula" target="#formula_13">2</ref> ), the following criteria are selected:</p><p>• the accuracy criterion, which means the choice of such values of the parameters (m_{ij}, \sigma_{ij}) that deliver the minimum of the mean square error (the difference between the model output and the desired output)</p><formula xml:id="formula_21">F = \frac{1}{3P}\sum_{p=1}^{P}\sum_{z=1}^{3}\left(y_{pz} - d_{pz}\right)^{2} \to \min,<label>(3)</label></formula><p>• the reliability criterion, which means the choice of such values of the parameters (m_{ij}, \sigma_{ij}) that provide the minimum probability of making a wrong decision (a mismatch between the decision of the model and the desired decision)</p><formula xml:id="formula_23">F = \frac{1}{P}\sum_{p=1}^{P}\left[\arg\max_{z=\overline{1,3}} y_{pz} \ne \arg\max_{z=\overline{1,3}} d_{pz}\right] \to \min,<label>(4)</label></formula><p>• the computational complexity criterion, which means the choice of such values of the parameters (m_{ij}, \sigma_{ij}) that provide the minimum computational complexity</p><formula xml:id="formula_25">F = T \to \min.<label>(5)</label></formula></div>
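Criteria (3) and (4) reduce to a few lines of array code. The sample outputs y and one-hot targets d below are assumed purely for illustration:

```python
import numpy as np

# Sketch of criteria (3) and (4); y and d are assumed example arrays (P, 3).
def mse_criterion(y, d):                    # criterion (3): mean square error
    return float(np.mean((y - d) ** 2))

def wrong_decision_rate(y, d):              # criterion (4): P(wrong decision)
    return float(np.mean(np.argmax(y, axis=1) != np.argmax(d, axis=1)))

y = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])
d = np.eye(3)                               # desired one-hot decisions
```

Note that criterion (4) only compares the argmax of each row, so a model can have a nonzero mean square error while still making every decision correctly, as this example shows.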
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management based on the backpropagation algorithm in batch mode</head><p>For the identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management (1), the procedure for determining these parameters based on gradient descent has been further improved in the work by calculating only the vector of parameters (m_{i1}, \sigma_{i1}). Its main steps after initialization are the following.</p><p>3. Calculation of the model output for each training pattern p (forward move)</p><formula xml:id="formula_27">y_{pz} = 1 - \prod_{r=1}^{8}\left(1 - w_r^{z} w_r \mu_{B_r}(z)\prod_{i=1}^{3}\sum_{j=1}^{2} w_{ij}^{1r}\mu_{A_{ij}}(x_{pi})\right), \quad z = \overline{1,3}.</formula><p>4. Calculation of the error energy E(n) based on criterion (3) and of its gradient with respect to the parameters of the membership functions of the subconditions of model (1) (backward move).</p><p>5. Parameter update</p><formula xml:id="formula_28">m_{i1} = m_{i1} - \eta\frac{\partial E(n)}{\partial m_{i1}}, \quad \sigma_{i1} = \sigma_{i1} - \eta\frac{\partial E(n)}{\partial \sigma_{i1}}, \quad i = \overline{1,3},</formula><p>where \eta is a parameter that determines the learning rate (with a large \eta, learning is faster, but the risk of obtaining an incorrect solution increases) and takes values between 0 and 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>6. Checking the termination condition: if</p><formula xml:id="formula_29">E > \varepsilon,</formula><p>then n = n + 1 and go to step 3.</p><p>The value \varepsilon is calculated experimentally.</p></div>
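The loop of steps 3-6 can be condensed into a short sketch. This is our own simplified stand-in, not the paper's procedure: for brevity it fits a single Gaussian membership function to assumed target data, using the analytic batch-mode gradients of the squared error and the step-6 termination test E ≤ ε.

```python
import numpy as np

# Condensed sketch of batch gradient descent with the E <= eps stop test;
# the target data and all constants are assumed, not taken from the paper.
def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2)

def fit(x, d, m=0.0, s=1.0, eta=0.2, eps=1e-4, max_iter=20000):
    for n in range(max_iter):
        y = gaussian(x, m, s)               # forward move (step 3)
        err = y - d
        E = np.mean(err ** 2)               # error energy, criterion (3)
        if E <= eps:                        # termination condition (step 6)
            break
        # batch-mode analytic gradients (backward move, step 4)
        gm = np.mean(2 * err * y * (x - m) / s**2)
        gs = np.mean(2 * err * y * (x - m)**2 / s**3)
        m -= eta * gm                       # parameter update (step 5)
        s -= eta * gs
    return m, s, float(np.mean((gaussian(x, m, s) - d) ** 2))

x = np.linspace(-3.0, 3.0, 61)
d = gaussian(x, 0.8, 1.5)                   # assumed realizable target
m, s, E = fit(x, d)
```

Because the target is realizable, the loop terminates once the error energy drops below ε; with an overly large η the iteration can overshoot, which is the risk the section warns about.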
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management based on a genetic algorithm</head><p>For the identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management (2), the procedure for determining these parameters has been further improved in the work by using a combination of a genetic algorithm and simulated annealing to accelerate and improve the accuracy of parameter identification, which involves the following steps:</p><p>• defining the individuals of the initial population;</p><p>• defining a fitness function;</p><p>• selecting a reproduction (selection) operator;</p><p>• selecting a crossover operator;</p><p>• selecting a mutation operator;</p><p>• selecting a reduction operator;</p><p>• defining a stop condition.</p><p>Real genes were selected for the following reasons:</p><p>• the ability to search in large spaces, which is difficult in the case of binary genes, when an increase in the search space reduces the accuracy of the solution for a constant chromosome length;</p><p>• the ability to tune solutions locally;</p><p>• the absence of the encoding/decoding operations that binary genes require, which increases the speed of the algorithm;</p><p>• the proximity to the formulation of most applied problems (each real gene is responsible for one variable or parameter, which is impossible in the case of binary genes).</p><p>The chromosome that represents the i-th individual of the population is the vector of the parameters of the membership functions. Criterion (4) was chosen as the fitness function. 
To select the parameter vectors for crossover and mutation, the following effective combination is used as the reproduction operator</p><formula xml:id="formula_30">P(h_i) = \exp(-1/g(n))\frac{1}{|H|} + \left(1 - \exp(-1/g(n))\right)\frac{1}{|H|}\left(a - (2a - 2)\frac{i - 1}{|H| - 1}\right), \quad i = \overline{1, I}.</formula><p>Thus, in the early stages of the genetic algorithm, uniform selection is used to explore the entire search space (random selection of chromosomes), and in the final stages, linearly ordered selection is used to make the search directed (the current best chromosomes are preserved). This combination does not require scaling and can be used when minimizing the fitness function.</p><p>To combine two variants of the parameter vector selected by the reproduction operator, uniform crossover is used as the crossover operator.</p><p>The choice of parents is carried out through the following effective combination: in the early stages of the genetic algorithm, outbreeding is used, which provides exploration of the entire search space, and in the final stages, inbreeding is used, which makes the search directed. This combination does not require scaling and can be used when minimizing the fitness function.</p><p>Once the parents are selected, crossover is carried out and two offspring are produced.</p><p>To increase the variety of variants for a global search for the optimal parameter vector, a heterogeneous (non-uniform) mutation is applied after the crossover.</p><p>The mutation step is defined below. 
Thus, in the early stages of the genetic algorithm, a mutation with a large step occurs with a high probability, which ensures the exploration of the entire search space, and at the final stages, the probability of a mutation and its step tend to zero, which makes the search directed.</p><formula xml:id="formula_31">\Delta_j(n) = \left(g_j^{\max} - g_j^{\min}\right)\left(1 - r^{(1 - n/N)^{\beta}}\right).</formula><p>The reduction operator allows forming a new population based on the previous population and the parameter vectors obtained by crossover and mutation. A scheme</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>is used as a reduction operator that does not require scaling and can be used to minimize the fitness function.</p><p>The paper proposes the following stop condition</p><formula xml:id="formula_32">\max_{i} F(h_i) \le \varepsilon, \quad i = \overline{1, I}.</formula><p>The value \varepsilon is calculated experimentally.</p></div>
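The annealed exploration-to-exploitation schedule described in this section can be illustrated with a toy real-coded genetic algorithm. Everything here is our own simplified stand-in: the fitness function, bounds, annealing schedule and elitist reduction are assumptions for demonstration, not the paper's exact operators.

```python
import random

random.seed(1)

def fitness(h):                 # assumed stand-in to minimize (not criterion (4))
    return sum((g - 0.5) ** 2 for g in h)

def run_ga(dim=2, pop=20, N=150, lo=0.0, hi=1.0, beta=2.0):
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for n in range(N):
        anneal = 1.0 - n / N                       # decays from 1 to 0
        P.sort(key=fitness)
        children = []
        for _ in range(pop):
            if random.random() < anneal:           # early: uniform selection
                a, b = random.sample(P, 2)
            else:                                  # late: rank-biased selection
                a = P[random.randrange(pop // 2)]
                b = P[random.randrange(pop // 2)]
            child = [u if random.random() < 0.5 else v
                     for u, v in zip(a, b)]        # uniform crossover
            # non-uniform mutation: the step shrinks as n approaches N
            step = (hi - lo) * (1 - random.random() ** ((1 - n / N) ** beta))
            j = random.randrange(dim)
            child[j] = min(hi, max(lo, child[j] + random.choice([-1, 1]) * step))
            children.append(child)
        P = sorted(P + children, key=fitness)[:pop]  # elitist reduction
    return P[0], fitness(P[0])

best, f = run_ga()
```

Early generations mutate with large steps and select almost uniformly; late generations mutate with vanishing steps and favor the current best chromosomes, matching the exploration-then-exploitation behavior the section describes.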
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Numerical study</head><p>A numerical study of the proposed mathematical models of an artificial neuro-fuzzy network and of a conventional multilayer perceptron was carried out in the Matlab package using the Deep Learning Toolbox, the Global Optimization Toolbox and the Fuzzy Logic Toolbox. Table <ref type="table" target="#tab_8">1</ref> shows the computational complexity, the root mean square errors (RMS) and the probabilities of making incorrect decisions on dynamic stock buffer management, obtained on the data set of the logistics company Ekol Ukraine using an artificial neural network of the multilayer perceptron (MLP) type with backpropagation (BP) and a genetic algorithm (GA), and the proposed models ( <ref type="formula" target="#formula_8">1</ref> ) and ( <ref type="formula" target="#formula_13">2</ref> ) with backpropagation (BP) and a genetic algorithm (GA), respectively. The MLP had two hidden layers, each consisting of six neurons, like the input layer. It was experimentally established that the best parameter value is 0.05.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>. Based on the experiments carried out, it can be argued that the procedure for identifying parameters based on the genetic algorithm is more effective than the training method based on backpropagation by reducing the probability of hitting a local extremum, automatic selection of the models structure and using the technology of parallel information processing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Conclusions</head><p>1. To solve the problem of increasing the efficiency of control of dynamic objects in natural language, the corresponding artificial intelligence methods were investigated using the example of dynamic stock buffer management. These studies have shown that, as of today, the most effective approach is the use of artificial neural networks in combination with a fuzzy inference system and a genetic algorithm.</p><p>2. The novelty of the research lies in the fact that the proposed method of dynamic stock buffer management is based on fuzzy logic and linguistic constructions; provides a representation of knowledge about stock buffer management in the form of rules with linguistic constructions that are easily understandable by a person; and reduces the computational complexity, the root mean square error and the probability of making an incorrect decision by automatically choosing the structure of the model, reducing the likelihood of hitting a local extremum and using the technology of parallel information processing for the genetic algorithm and for backpropagation in batch mode.</p><p>3. As a result of the numerical study, it was found that the proposed method of neuro-fuzzy dynamic stock buffer management based on linguistic constructions provides a probability of incorrect decisions on the dynamic stock buffer management of 0.02 and a root mean square error of 0.05.</p><p>4. A prospect for further research is the use of the proposed method of neuro-fuzzy dynamic stock buffer management based on linguistic constructions in various intelligent control systems for dynamic objects in natural language.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The structure of the model of an artificial neuro-fuzzy network in the form of a graph</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>Calculation of ANN error energy based on criterion (3) the parameters of the membership function of the model subconditions (1) (backward move)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>minimum value of the j th gene, niteration number, Nmaximum number of iterations, r -random number obtained from a uniform distribution law, -a parameter that determines the learning rate (decrease in the annealing temperature) (for large  learning is faster, but the risk of getting an incorrect solution increases)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>Deep Learning Toolbox (to identify the parameters of the model of a conventional multilayer perceptron based on backpropagation),  Global Optimization Toolbox (to identify the parameters of the model of a conventional multilayer perceptron and the proposed model of an artificial neuro-fuzzy network (2) based on a genetic algorithm),  Fuzzy Logic Toolbox (to identify the parameters of the proposed model of an artificial neurofuzzy network (1) based on backpropagation).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_8"><head>Table 1</head><label>1</label><figDesc>Computational complexity, root mean square error and probability of making an incorrect decision for dynamic stock buffer management. According to Table 1, the best results are obtained by model (2) with the identification of parameters based on GA and with the Gaussian membership function.</figDesc><table><row><cell>Parameter identification model and method</cell><cell>RMS</cell><cell>The probability of making an incorrect decision</cell><cell>Computational complexity</cell></row><row><cell>Regular MLP with BP in sequential mode</cell><cell>0.5</cell><cell>0.2</cell><cell>T=PN</cell></row><row><cell>Regular MLP with GA without parallelism</cell><cell>0.4</cell><cell>0.15</cell><cell>T=PNI</cell></row><row><cell>Author's model (1) with BP in batch mode with Gaussian membership function</cell><cell>0.1</cell><cell>0.04</cell><cell>T=N</cell></row><row><cell>Author's model (2) with GA with parallelism with Gaussian membership function</cell><cell>0.05</cell><cell>0.02</cell><cell>T=N</cell></row><row><cell>Author's model (1) with BP in batch mode with bell-shaped membership function</cell><cell>0.12</cell><cell>0.05</cell><cell>T=N</cell></row><row><cell>Author's model (2) with GA with parallelism with bell-shaped membership function</cell><cell>0.07</cell><cell>0.03</cell><cell>T=N</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A neurolinguistic model of grammatical construction processing</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Dominey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Inui</surname></persName>
		</author>
		<idno type="DOI">10.1162/jocn.2006.18.12.2088</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Cognitive Neuroscience</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="2088" to="2107" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Modeling a logical network of relations of semantic items in super phrasal unities</title>
		<author>
			<persName><forename type="first">N</forename><surname>Khairova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sharonova</surname></persName>
		</author>
		<idno type="DOI">10.5555/2354416</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2011 9th East-West Design &amp; Test Symposium (EWDTS)</title>
				<meeting>the 2011 9th East-West Design &amp; Test Symposium (EWDTS)</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="360" to="365" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Cox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Schleier</surname></persName>
		</author>
		<title level="m">Theory of Constraints Handbook</title>
				<meeting><address><addrLine>New York, NY, McGraw-Hill</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">My saga to improve production</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Goldratt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Selected Readings in Constraints Management</title>
				<meeting><address><addrLine>Falls Church, VA</addrLine></address></meeting>
		<imprint>
			<publisher>APICS</publisher>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="43" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Production: The TOC Way (Revised Edition) including CD-ROM Simulator and Workbook</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Goldratt</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>North River Press</publisher>
			<pubPlace>Great Barrington, MA</pubPlace>
		</imprint>
	</monogr>
	<note>Revised edition</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Sivanandam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sumathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Deepa</surname></persName>
		</author>
		<title level="m">Introduction to Neural Networks using Matlab 6</title>
				<meeting><address><addrLine>New Delhi</addrLine></address></meeting>
		<imprint>
			<publisher>The McGraw-Hill Comp., Inc</publisher>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Neural networks and Learning Machines</title>
		<author>
			<persName><forename type="first">S</forename><surname>Haykin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Pearson Education, Inc</publisher>
			<pubPlace>Upper Saddle River, New Jersey</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Neural Networks and Statistical Learning</title>
		<author>
			<persName><forename type="first">K.-L</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N S</forename><surname>Swamy</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>Springer-Verlag</publisher>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Forecast method for natural language constructions based on a modified gated recursive block</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fedorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Utkina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Nechyporenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2604</biblScope>
			<biblScope unit="page" from="199" to="214" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A novel learning method for Elman neural network using local search</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Vairappan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Information Processing - Letters and Reviews</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="181" to="188" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Dey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Salem</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1701.05923</idno>
		<ptr target="https://arxiv.org/ftp/arxiv/papers/1701/1701.05923.pdf" />
		<title level="m">Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learning phrase representations using RNN encoder-decoder for statistical machine translation</title>
		<author>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Merrienboer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gulcehre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bougares</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schwenk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/D14-1179</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1724" to="1734" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Special issue on echo state networks and liquid state machines</title>
		<author>
			<persName><forename type="first">H</forename><surname>Jaeger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Maass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Príncipe</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neunet.2007.04.001</idno>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="287" to="289" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Translating cuneiform symbols using artificial neural network</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H S</forename><surname>Hamdany</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R O</forename><surname>Al-Nima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Albak</surname></persName>
		</author>
		<idno type="DOI">10.12928/telkomnika.v19i2.16134</idno>
	</analytic>
	<monogr>
		<title level="j">TELKOMNIKA Telecommunication, Computing, Electronics and Control</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="438" to="443" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Fuzzy rule based innovation projects estimation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rotshtein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shtovba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Mostav</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings Joint 9th IFSA World Congress and 20th NAFIPS International Conference</title>
				<meeting>Joint 9th IFSA World Congress and 20th NAFIPS International Conference</meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="122" to="126" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Fuzzy logics associated with neural networks in the real time for better world</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">P</forename><surname>Reddy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Deepika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Prasad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Kumar</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.matpr.2017.07.197</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Advancements in Aeromechanical Materials for Manufacturing (ICAAMM-2016)</title>
				<meeting>the International Conference on Advancements in Aeromechanical Materials for Manufacturing (ICAAMM-2016)<address><addrLine>Hyderabad, Telangana, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="8507" to="8516" />
		</imprint>
		<respStmt>
			<orgName>MLR Institute of Technology</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Recurrent fuzzy wavelet neural networks based on robust adaptive sliding mode control for industrial robot manipulators</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">T</forename><surname>Yen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">N</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V</forename><surname>Cuong</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00521-018-3520-3</idno>
	</analytic>
	<monogr>
		<title level="j">Neural Computing and Applications</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="6945" to="6958" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Medical disease analysis using neuro-fuzzy with feature extraction model for classification</title>
		<author>
			<persName><forename type="first">H</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Naik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Behera</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.imu.2019.100288</idno>
	</analytic>
	<monogr>
		<title level="j">Informatics in Medicine Unlocked</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page">100288</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Parameter optimization via cuckoo optimization algorithm of fuzzy controller for liquid level control</title>
		<author>
			<persName><forename type="first">S</forename><surname>Balochian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ebrahimi</surname></persName>
		</author>
		<idno type="DOI">10.1155/2013/982354</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Engineering</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Diagnostic rule mining based on artificial immune system for a case of uneven distribution of classes in sample</title>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Levashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zaitseva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="3" to="11" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Optimization method based on the synthesis of clonal selection and annealing simulation algorithms</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">O</forename><surname>Grygor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">E</forename><surname>Fedorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">Yu</forename><surname>Utkina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Lukashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Rudakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Harder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Lukashenko</surname></persName>
		</author>
		<idno type="DOI">10.15588/1607-3274-2019-2-10</idno>
	</analytic>
	<monogr>
		<title level="j">Radio Electronics, Computer Science, Control</title>
		<imprint>
			<biblScope unit="page" from="90" to="99" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Method for parametric identification of Gaussian mixture model based on clonal selection algorithm</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fedorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lukashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Utkina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lukashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Rudakov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2353</biblScope>
			<biblScope unit="page" from="41" to="55" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Engelbrecht</surname></persName>
		</author>
		<title level="m">Computational Intelligence: An Introduction</title>
				<meeting><address><addrLine>Chichester, West Sussex</addrLine></address></meeting>
		<imprint>
			<publisher>Wiley &amp; Sons</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">X.-S</forename><surname>Yang</surname></persName>
		</author>
		<title level="m">Nature-inspired Algorithms and Applied Optimization</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Nakib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E.-G</forename><surname>Talbi</surname></persName>
		</author>
		<title level="m">Metaheuristics for Medicine and Biology</title>
				<meeting><address><addrLine>Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">CMA-ES with restarts for solving CEC 2013 benchmark problems</title>
		<author>
			<persName><forename type="first">I</forename><surname>Loshchilov</surname></persName>
		</author>
		<idno type="DOI">10.1109/CEC.2013.6557593</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE congress on evolutionary computation</title>
				<meeting>IEEE congress on evolutionary computation<address><addrLine>CEC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="369" to="376" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A Local Search Interface for Interactive Evolutionary Architectural Design</title>
		<author>
			<persName><forename type="first">J</forename><surname>Byrne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hemberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>O'Neill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Brabazon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of European Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting>European Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design<address><addrLine>Evo-MUSART</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">7247</biblScope>
			<biblScope unit="page" from="23" to="34" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
