<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Neural Network Modeling Method of Transformations Data of Audit Production with Returnable Waste</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Tatiana</forename><surname>Neskorodieva</surname></persName>
							<email>t.neskorodieva@donnu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Vasyl&apos; Stus Donetsk National University</orgName>
								<address>
									<addrLine>600-richchia str., 21</addrLine>
									<postCode>21021</postCode>
									<settlement>Vinnytsia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eugene</forename><surname>Fedorov</surname></persName>
							<email>fedorovee75@ukr.net</email>
							<affiliation key="aff0">
								<orgName type="institution">Vasyl&apos; Stus Donetsk National University</orgName>
								<address>
									<addrLine>600-richchia str., 21</addrLine>
									<postCode>21021</postCode>
									<settlement>Vinnytsia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Cherkasy State Technological University</orgName>
								<address>
									<addrLine>Shevchenko blvd, 460</addrLine>
									<postCode>18006</postCode>
									<settlement>Cherkasy</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Smirnov</surname></persName>
							<email>dr.smirnovoa@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="institution" key="instit1">Central Ukrainian National Technical University</orgName>
								<orgName type="institution" key="instit2">avenue University</orgName>
								<address>
									<postCode>25006</postCode>
									<settlement>Kropivnitskiy</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pavlo</forename><surname>Rymar</surname></persName>
							<email>p.rymar@donnu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Vasyl&apos; Stus Donetsk National University</orgName>
								<address>
									<addrLine>600-richchia str., 21</addrLine>
									<postCode>21021</postCode>
									<settlement>Vinnytsia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Neural Network Modeling Method of Transformations Data of Audit Production with Returnable Waste</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">CF74F8653B7AE9AC2014A8FFE5C78859</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T16:26+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Currently, the analytical procedures used during the audit are based on data mining techniques. The object of the research is the process of the content auditing of the production with returnable waste and intermediate products. The aim of the work is to reduce the risk of incorrect display of the dataset in the DSS of the audit of the method of neural network modeling of transformations of audit data of production with recyclable waste and intermediates. This will reduce the risk of the validated data misclassification. Audit data set transformations of a prerequisite "Completeness" are presented the sequences of sets data mappings of consecutive operations. Reached further development a method of parametrical identification of the MRMLP model which considers number of iterations of training and combines Gaussian distributions and Cauchy that increases the forecast accuracy as on initial iterations all search space is investigated, and on final iterations the search becomes directed. The software implementing the offered methods in MATLAB package was developed and investigated on the data of the release of raw materials into production and the posting of finished products of a with a two-year depth of sampling with daily time intervals. The made experiments confirmed operability of the developed software and allow to recommend it for use in practice in a subsystem of the automated analysis of DSS of audit for check of maps of sets of data of the raw materials release into production and the products output.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Keywords1</head><p>production audit, returnable waste, intermediate products, mapping by neural network, modified recurrent multilayered perceptron, metaheuristics, DSS, risk of wrong mapping of data sets, risk of the validated data misclassification.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In the process of development of international and national economics, industry of IT, it is possible to distinguish the following basic tendencies: digital transformations realization, digital economy forming, socio-economic processes globalization and IT accompanying them <ref type="bibr" target="#b0">[1]</ref>.</p><p>These processes result in the origin of global, multilevel hierarchical structures of heterogeneous, multivariable, multifunction connections, interactions and cooperation of manage objects (objects of audit). Large volumes of information about them have been accumulated in the information systems of account, management and audit.</p><p>Consequently, nowadays the scientific and technical issue of the modern information technologies in financial and economic sphere of Ukraine is forming of the methodology of planning and creation of the decision support systems (DSS) at the audit of enterprises in the conditions of application of IT and with the use of information technologies on the basis of the automated analysis of the large volumes of data about financial and economic activity and states of enterprises with the multi-level hierarchical structure of heterogeneous, multivariable, multifunction connections, intercommunications and cooperation of objects of audit with the purpose of expansion of functional possibilities, increase of efficiency and universality of IT audit <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>.</p><p>Currently, analytical procedures used during the audit are based on data mining techniques <ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref>. 
Automated audit DSS implies the automatic formation of recommended decisions based on the results of automated data analysis, which improves the quality of the audit process and reduces the risk of incorrect mapping of data sets <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b8">8]</ref>. Unlike the traditional approach, computer technologies for data analysis in the audit system accelerate the audit process and improve its accuracy, which is extremely critical given the large number of related tasks at the lower and middle levels and the number of indicators and observations in every task <ref type="bibr" target="#b9">[9]</ref>.</p><p>When developing a decision-making system in audit based on data mining technologies, three methods have been created: classifying variables, forming analysis sets, and mapping analysis sets.</p><p>The peculiarity of the methodology for classifying indicators is that qualitatively different (by semantic content) variables are classified: numerological, linguistic, quantitative, and logical. The essence of the second technique is determined by the qualitative meaning of the indicators: in accordance with it, sets with the corresponding semantic content are formed: document numbers, names of indicators, quantitative estimates of indicator values, and logical indicators.</p><p>The third technique is subordinated to mapping the formed sets of the same type onto each other to determine equivalence in the following senses: numerological, linguistic, quantitative, logical.</p><p>Neural networks are used for modeling the data transformations of production audit.</p><p>The following neural networks are most often used for mapping audit indicators:</p><p>-Elman's neural network (ENN), or simple recurrent network (SRN) <ref type="bibr" target="#b10">[10,</ref><ref type="bibr" target="#b11">11]</ref>, which is a recurrent two-layer network constructed based on MLP. 
The advantage of this network is a simpler architecture and a higher training speed than those of gated, reservoir, and bidirectional networks. A disadvantage is insufficient forecast accuracy in comparison with gated, reservoir, and bidirectional networks;</p><p>-the bidirectional recurrent neural network (BRNN) <ref type="bibr" target="#b12">[12,</ref><ref type="bibr" target="#b13">13]</ref>, which is a recurrent two-layer network constructed from two Elman neural networks. The advantage of this network is a higher forecast accuracy than that of a standard Elman network. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard Elman network;</p><p>-long short-term memory (LSTM) <ref type="bibr" target="#b14">[14,</ref><ref type="bibr" target="#b15">15]</ref>, which is a recurrent network constructed from memory units (containing one or more cells) with input, output, and forget gates (FIR filters). The advantage of this network is a higher forecast accuracy than that of a standard Elman network. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard Elman network; -the bidirectional long short-term memory (BLSTM) <ref type="bibr" target="#b16">[16,</ref><ref type="bibr" target="#b17">17]</ref>, which is a recurrent network constructed from two LSTM neural networks. The advantage of this network is a higher forecast accuracy than that of a standard LSTM. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard LSTM;</p><p>-the gated recurrent unit (GRU) <ref type="bibr" target="#b18">[18,</ref><ref type="bibr" target="#b19">19]</ref>, which is a recurrent two-layer network constructed based on the reset and update gates of the hidden units (FIR filters). 
The advantage of this network is a higher forecast accuracy than that of a standard Elman network. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard Elman network;</p><p>-the echo state network (ESN) <ref type="bibr" target="#b20">[20]</ref>, which is a recurrent two-layer network constructed based on a reservoir (a layer of interconnected, not fully connected neurons). The advantage of this network is a higher forecast accuracy than that of a standard Elman network. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard Elman network;</p><p>-the liquid state machine (LSM) <ref type="bibr" target="#b21">[21]</ref>, which is a recurrent two-layer network constructed based on a reservoir (a layer of interconnected, not fully connected spiking neurons) and an MLP. The advantage of this network is a higher forecast accuracy than that of a standard Elman network. A disadvantage is a more complex determination of the architecture and a lower training speed than those of a standard Elman network. Thus, none of these networks meets all the criteria. To accelerate training and increase the accuracy of the data transformations model of production audit, metaheuristics (or modern heuristics) are now used <ref type="bibr" target="#b22">[22]</ref>. 
Metaheuristics expand the opportunities of heuristics by combining heuristic methods on the basis of a high-level strategy <ref type="bibr" target="#b23">[23]</ref>.</p><p>Existing metaheuristics possess one or more of the following disadvantages:</p><p>-only an abstract description of the method exists, or the description is focused on solving only a certain task <ref type="bibr" target="#b24">[24]</ref>;</p><p>-the influence of the iteration number on the solution search process is not considered <ref type="bibr" target="#b25">[25]</ref>; -the convergence of the method is not guaranteed <ref type="bibr" target="#b26">[26]</ref>; -there is no opportunity to use non-binary potential solutions <ref type="bibr" target="#b27">[27]</ref>; -the procedure of determining parameter values is not automated <ref type="bibr" target="#b28">[28]</ref>; -there is no opportunity to solve problems of conditional optimization <ref type="bibr" target="#b29">[29]</ref>; -the accuracy of the method is insufficient <ref type="bibr" target="#b30">[30]</ref>.</p><p>In this regard, there is the problem of creating effective metaheuristic optimization methods.</p><p>It is therefore topical to create a neural network that considers the functional structure of production with returnable and non-returnable waste and intermediate products and that learns on the basis of an effective metaheuristic.</p><p>The aim of the work is to reduce the risk of incorrect mapping of data sets in the audit DSS by a method of neural network modeling of audit data transformations of production with returnable waste and intermediate products.</p><p>To achieve this aim, it is necessary to solve the following tasks:</p><p>-to offer a structural model of audit data transformations of production; -to offer a neural network model of audit data transformations of production based on a recurrent multilayer perceptron;</p><p>-to select a criterion for evaluating the efficiency of the neural network model of production audit data transformation;</p><p>-to offer a method of parametric identification of the neural network model of production audit data transformation based on back propagation through time;</p><p>-to offer a method of parametric identification of the neural network model of audit data transformation of production based on cross entropy and stochastic extremum search with training on vectors of normal distribution;</p><p>-to execute numerical research.</p><p>Problem formulation. Let the training set <formula xml:id="formula_1">\{(\mathbf{x}_\mu, \tilde{\mathbf{d}}_\mu, \tilde{\mathbf{d}}_\mu^{(1)}, \ldots, \tilde{\mathbf{d}}_\mu^{(H)})\}, \quad \mu \in \overline{1, P},</formula> be given for the model of data transformations of production audit, where x is the input signal and w is the parameters vector; the problem is to find a model parameter vector w* that satisfies the criterion</p><formula xml:id="formula_0">F = \frac{1}{P} \sum_{\mu=1}^{P} \left( g(\mathbf{x}_\mu, \mathbf{w}) - (\tilde{\mathbf{d}}_\mu, \tilde{\mathbf{d}}_\mu^{(1)}, \ldots, \tilde{\mathbf{d}}_\mu^{(H)}) \right)^2 \rightarrow \min_{\mathbf{w}}.<label>(1)</label></formula></div>
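To make criterion (1) concrete, here is a minimal Python sketch of the sum-of-squared-errors objective over a training set; the model function g, the toy data, and all names are illustrative stand-ins, not the paper's MRMLP.

```python
import numpy as np

def criterion(g, w, X, D):
    # Criterion (1): mean over the training set of the squared deviation
    # between the model output g(x_mu, w) and the training target d_mu.
    return sum(float(np.sum((g(x, w) - d) ** 2)) for x, d in zip(X, D)) / len(X)

# Toy check with a linear stand-in model g(x, w) = w * x (illustrative only):
g = lambda x, w: w * x
X = np.array([1.0, 2.0, 3.0])
D = np.array([2.0, 4.0, 6.0])
assert criterion(g, 2.0, X, D) == 0.0
```

At the true parameter value the criterion vanishes; parametric identification searches for the w that drives it toward its minimum.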
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Materials and methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Formalization of audit subject area data subelements transformations of the prerequisite "Completeness"</head><p>Audit data set transformations of a prerequisite "Completeness" will be presented the sequences of sets data mappings of consecutive operations</p><formula xml:id="formula_2">1 2 m M i i i i Z Z Z Z → → → →  , 1 m M i i i       , ,<label>, 1 ( ) (I</label></formula><p>)</p><formula xml:id="formula_3">m M i i i A ∈    , I 1, I =  ,<label>(2)</label></formula><p>where Z is reporting data set, , ,</p><formula xml:id="formula_4">M i i i  <label>1 ( ) m</label></formula><p>is combination of consecutive operation types of a set I 1, I =  , (I) A  is set of possible combinations on a set I 1, I =  . Therefore "Completeness" prerequisite audit we will present as the transformations checking of subelements of data domain in the form of the sequences of mapping of splitting's data elements of the sequences</p><formula xml:id="formula_5">1 2 ( ) ( ) ( ) ( ) m M i i i i Z Z Z Z ℜ → ℜ → ℜ → → ℜ  , 1 m M i i i       , ,<label>, 1 ( ) (I</label></formula><p>)</p><formula xml:id="formula_6">m M i i i A ∈    , I 1, I =  ,<label>(3) where</label></formula><formula xml:id="formula_7">( ) Z ℜ is splitting set Z . Possible combinations set of consecutive operation types , ,<label>1</label></formula><formula xml:id="formula_8">( ) m M i i i   defined in (3)</formula><p>includes check in direct and in the opposite direction.</p><p>The model of the subelements transformation of the "Completeness" prerequisite audit subject domain will be formed on the example of the direct material costs audit. 
Models of their transformations can be presented in the form of graphs in which every vertex corresponds to a subelement and every edge to a mapping that describes the interrelation between the corresponding subelements.</p><p>For this purpose, we use a formalization of the set of direct material costs in the form of the graph <formula xml:id="formula_10">G = (Z, R)</formula> (fig. <ref type="figure" target="#fig_2">1</ref>), where the vertices are the accounts on which these current assets are kept and the edges are the operations as a result of which their transformation occurs. Then the model of transformation of subelements of the audit data domain for the prerequisite "Completeness" under direct full check represents mappings of subsets of the data on raw materials receipt, release of raw materials into production, write-off of the production prime cost of finished goods, and receipt of finished goods.</p><p>In the specific case when the partitioning of the sets is carried out on the basis of logical conditions characterizing belonging to one of the accounting item subspecies, the model of transformation of subelements of the audit data domain for the prerequisite "Completeness" under direct full check is the set of sequences of mappings of the data sets of settlement operations with suppliers, by supplier types, onto subsets of the data of operations by raw materials types, then onto subsets of the data of operations by product types and finished goods types.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Choosing a neural network model for mapping audit sets</head><p>The unit diagram of model of the modified recurrent multilayered perceptron (MRMLP) fullconnected recurrent layers of semi-products production (the neurons forming them are designated in continuous white color), not full-connected non-recurrent layers of unreturnable release of raw materials in production raw materials receipt   and unreturnable waste (1)  y</p><formula xml:id="formula_15">) Z i ℜ<label>(1) 1 (</label></formula><formula xml:id="formula_16">) Z i ℜ (1) 3 ( ) Z i ℜ (1) 4 ( ) Z i ℜ write-off of production prime cost of finished goods receipt of finished goods … … (1) 2 ( ) Z i ℜ  (1) 3 ( ) Z i ℜ  … … write-off of production prime cost of finished goods receipt of finished goods 1 ( ) 1 ( ) R Z i ℜ 2 ( ) 2 ( ) R Z i ℜ 3 ( ) 3 ( ) R Z i ℜ 4 ( ) 4 ( ) R Z i ℜ raw materials receipt release of raw materials in production 3 ( ) 3 ( ) R Z i ℜ  3 ( ) 2 ( ) R Z i ℜ  waste (<label>(1) 2 (</label></formula><formula xml:id="formula_17"> = (1)<label>(1) (1) 1</label></formula><formula xml:id="formula_18">( ,..., ) N y y   , …<label>, ( ) y H  = ( ) ( ) ( ) 1 ( ,...,</label></formula><p>)</p><formula xml:id="formula_19">H H H N y y   , it is presented in the form (0) ( ) i i y n x = ,<label>(0) 1</label></formula><formula xml:id="formula_20">, i N ∈ , ( ) ( ) ( ) ( ) ( ( )) k k k j j y n f s n = , ( ) 1, k j N ∈ , 1, k H ∈ ,<label>( 1) ( )</label></formula><formula xml:id="formula_21">( ) ( ) ( ) ( 1) ( ) ( ) 1 1 ( ) ( )<label>( 1)</label></formula><formula xml:id="formula_22">k k N N k k k k k k j j ij i ij i i i s n b w y n w y n − − = = = + + − ∑ ∑  , ( ) ( ( )) j j y n f s n =    , ( ) 1, H j N ∈ , ( ) 0 ( ) ( ) ( ) H j jj i s n b n w y n = +    , ( ) ( ) ( ) ( ) ( ( )) k k k j j y n f s n =    , ( ) 1, k j N ∈ , 1, k H ∈ , ( ) ( ) ( ) ( ) 0 ( ) ( ) ( ) k k k k j jj j s n b n w y n = +    ,</formula><p>where ( ) k N 
-neurons number in k -th layer of semi-products production and unreturnable waste, H -quantity of layers of semi-products production and unreturnable waste,   </p></div>
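As an illustration of the layer equations above, the following Python sketch runs one time step of a generic recurrent multilayer perceptron, where each hidden layer mixes the current output of the layer below with its own output from the previous time step (the recurrent term). The parameter layout and all names are assumptions for illustration, not the exact MRMLP of the paper.

```python
import numpy as np

def mrmlp_forward(x, prev_states, params, f=np.tanh):
    # One time step: for each layer k, s = b + W_in @ y_{k-1}(n) + W_rec @ y_k(n-1),
    # then y_k(n) = f(s). Returns the top-layer output and the new layer states.
    y = x
    new_states = []
    for (b, W_in, W_rec), y_prev in zip(params, prev_states):
        s = b + W_in @ y + W_rec @ y_prev  # feedforward + recurrent contribution
        y = f(s)
        new_states.append(y)
    return y, new_states

# Two hidden layers of 4 and 3 neurons for a 5-dimensional input (toy sizes).
rng = np.random.default_rng(1)
sizes = [5, 4, 3]
params = [(rng.normal(size=n), rng.normal(size=(n, m)), rng.normal(size=(n, n)))
          for m, n in zip(sizes, sizes[1:])]
states = [np.zeros(n) for n in sizes[1:]]
y, states = mrmlp_forward(rng.normal(size=5), states, params)
```

Feeding a sequence through repeated calls, carrying `states` forward, reproduces the role of the y(n-1) terms in the equations.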
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Criterion choice for evaluation of neural network model efficiency of data transformations of production audit</head><formula xml:id="formula_23">P H P k k H k k F PN HP N µ µ µ µ µ = = µ = = − + − → ∑ ∑ ∑     , where y µ  ,<label>(1) ( ) y ,</label></formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Method of parametrical identification of data transformations model of production audit based on the back propagation in time in a sequential mode</head><p>1. Number of iterations of training 1 n = , initialization by means of uniform distribution on an interval (0.1) or [-0.5, 0.5] bias ( ) ( )</p><formula xml:id="formula_24">k j b n , ( ) ( ) k j b n  , ( ) 1, , 1, k j N k H ∈ ∈ , ( ) j b n  , ( ) 1, H j N ∈</formula><p>, and weights ( ) ( ) </p><formula xml:id="formula_25">k ij w n ,<label>( 1) ( ) 1, , 1, , 1,</label></formula><formula xml:id="formula_26">k k i N j N k H − ∈ ∈ ∈ , ( ) ( ) k ij w n  , ( )<label>( 1) 1, , 1, , 1,</label></formula><formula xml:id="formula_27">k k i N j N k H − ∈ ∈ ∈ , ) ( ~n w jj , ( ) 1, H j N ∈ , ( ) ( ) k jj w n  , ( ) 1, , 1, k j N k H ∈ ∈ . 2. The training set is set (1) ( ) ( ) (0) ( ) ( ) {(x ,d ,d ,...,d ) | x ,d ,d } H N N k N H k R R R µ µ µ µ µ µ µ ∈ ∈ ∈      ,</formula><formula xml:id="formula_28">k i y n− = , ( ) 1, k i N ∈ , 1, k H ∈ .</formula><p>4. Calculation of a signal output for each full-connected recurrent layer of semi-products production considering returnable waste (forward propagation) (0) ( )</p><formula xml:id="formula_29">i i y n x µ = , (<label>) ( ) ( ) ( ) ( ( )</label></formula><formula xml:id="formula_30">) k k k j j y n f s n = , ( ) 1, k j N ∈ , 1, k H ∈ ,<label>( 1) ( ) ( ) ( ) ( ) ( 1) ( ) ( ) 1 1 (</label></formula><formula xml:id="formula_31">) ( ) ( ) ( ) ( )<label>( 1)</label></formula><formula xml:id="formula_32">k k N N k k k k k k j j ij i ij i i i s n b n w n y n w n y n − − = = = + + − ∑ ∑  .</formula><p>5. Calculation of a signal output for not full-connected non-recurrent layer of finished goods (forward propagation) ( ) ( ( ))</p><formula xml:id="formula_33">j j y n f s n =    , ( ) 1, H j N ∈ , ( ) ( ) ( ) ( ) ( ) H j j jj i s n b n w n y n = +    . 6</formula><p>. 
Calculation of a signal output for each not full-connected non-recurrent layer of unretainable waste (forward propagation)</p><formula xml:id="formula_34">( ) ( ) ( ) ( ) ( ( )) k k k j j y n f s n =    , ( ) 1, k j N ∈ , 1, k H ∈ , ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) k k k k j j jj j s n b n w n y n = +    .</formula><p>7. Calculation of energy of error ANN ( ) ( )</p><formula xml:id="formula_35">( ) ( ) 2 2 ( ) 1 1 1 1 1 ( ) ( ) ( ) 2 2 H k N H N k j j j k j</formula><p>E n e n e n</p><formula xml:id="formula_36">= = = = + ∑ ∑ ∑   , (<label>)</label></formula><formula xml:id="formula_37">( ) j j j e n y n d µ = −    , ( ) ( ) ( ) ( ) ( ) k k k j j j e n y n d µ = −    .</formula><p>8. Setup of synoptic weights based on generalized the delta rule (backward propagation)</p><formula xml:id="formula_38">( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k j j j E n b n b n b n ∂ + = − η ∂ , ( ) 1, k j N ∈ , 1, k H ∈ , ( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k ij ij ij E n w n w n w n ∂ + = − η ∂ , ( 1) 1, k i N − ∈ , ( ) 1, k j N ∈ , 1, k H ∈ , ( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k ij ij ij E n w n w n w n ∂ + = − η ∂    , ( ) 1, k j N ∈ , ( 1) 1, k i N − ∈ , 1, k H ∈ , ( ) ( 1) ( ) ( ) j j j E n b n b n b n ∂ + = − η ∂    , ( ) 1, H j N ∈ , ( ) ( 1) ( ) ( ) jj jj jj E n w n w n w n ∂ + = − η ∂    , ( ) 1, H j N ∈ , ( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k j j j E n b n b n b n ∂ + = − η ∂    , ( ) 1, k j N ∈ , 1, k H ∈ , ( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k jj jj jj E n w n w n w n ∂ + = − η ∂    , ( ) 1, k j N ∈ , 1, k H ∈ ,</formula><p>where η is the parameter determining training speed (at big η training happens quicker, but the danger to receive the incorrect solution increases), 0 1 </p><formula xml:id="formula_39">&lt; η &lt; . 
( ) ( ) ( ) ( ) ( ) k k j j E n g n b n ∂ = ∂ , ( 1) ( ) ( ) ( ) ( ) ( ) ( ) k k k i j ij E n y n g n w n − ∂ = ∂ , ( ) ( ) ( ) ( ) ( 1) ( ) ( ) k k k i j ij E n y n g n w n ∂ = − ∂  , ( ) ( ) ( ) j j E n g n b n ∂ = ∂   , ( ) ( ) ( ) ( ) ( ) H j j jj E n y n g n w n ∂ = ∂   , ( ) ( ) ( ) ( ) ( ) k k j j E n g n b n ∂ = ∂   , ( ) ( ) ( ) ( ) ( ) ( ) ( ) k k k j j jj E n y n g n w n ∂ = ∂   , k H H H H j jj j jj j k N j k k k k k k j jl l jj j l f s n w n g n w n g n k H g n f s n w n g n w n g n k H + + + =  ′ + =   =    ′ + &lt;          ∑       , ( ) ( ( )) ( ) j j j g n f s n e n ′ =     , (<label>) ( ) ( ) ( ) ( ) ( ( )) ( )</label></formula><formula xml:id="formula_40">k k k k l j j g n f s n e n ′ =     . 9.</formula><formula xml:id="formula_41">= − + &lt; ε ∑ to be completed.</formula><p>A high probability of hit in a local extremum belongs to disadvantage of this method that reduces training accuracy, and impossibility of training in mode that reduces training speed. In this regard in work the alternative method of training at a basis metaheuristic is offered.</p></div>
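The generalized delta rule at the heart of step 8 can be shown in isolation. This Python sketch applies the update w(n+1) = w(n) − η · ∂E/∂w to a one-parameter quadratic error with the gradient supplied analytically; it is illustrative only, since the paper's method obtains the gradients by backward propagation through the network.

```python
def delta_rule_step(w, grad_E, eta=0.1):
    # Generalized delta rule: move the parameter against the error gradient.
    return w - eta * grad_E

# Minimizing E(w) = (w - 3)^2, whose gradient is 2 * (w - 3):
w = 0.0
for _ in range(200):
    w = delta_rule_step(w, 2.0 * (w - 3.0))
assert round(w, 6) == 3.0
```

The step size eta plays exactly the role described in the text: a larger eta converges faster but risks overshooting the minimum.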
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Method of parametrical identification of data transformations model of production audit on a basis metaheuristic</head><p>The offered method of parametrical identification of production audit data transformations model is based on a method of cross entropy and stochastic search of an extremum with training at vectors of normal distribution <ref type="bibr" target="#b30">[30]</ref>. Feature of the offered method will be that for speed control of convergence of a method, speed control of change of distribution parameters and for providing that on initial iterations all search space was investigated, and on final iterations the search became directed, at generation of potential solutions number of iterations is considered. Besides, not only Gaussian distribution, but also Cauchy's distribution is used, and their value depends on number of iterations.</p><p>The offered method consists of the following stages:</p><p>1. Initialization 1.1. Task of the maximum number of iterations N , population size K , solution lengths M (corresponds to length of a vector of the model parameters MRMLP), the maximum quantity of the selected best solutions B , parameter for generation of scales parameters vector β , 0 1 &lt; β &lt; . 1.2. Initialization of a location's parameters vector </p><formula xml:id="formula_42">loc scale kj j j N n n x С N N N   −     = γ + γ +             , 1, j M ∈ ,</formula><p>where (0,1) N is standard normal distribution, (0,1) </p><formula xml:id="formula_43">C is standard Cauchy distribution. 3.3. If k K &lt; then { } k P P x =  , 1 k k = + ,</formula><formula xml:id="formula_44">k k k F x = , 1, k K ∈ .</formula><p>6. Define the best solution (best vector of the model parameters MRMLP) on all iterations if * * ( ) ( )</p><formula xml:id="formula_45">k F x F x &lt; then * * k x x = .</formula><p>7. Modification of distribution parameters (on a basis B the first, i.e. 
best, new potential solutions from population P ).</p><p>7.1. Modification of a vector of parameters of locations</p><formula xml:id="formula_46">loc loc loc j j j n N n N N −     γ = γ + γ          , 1 1 B loc j kj k x B = γ = ∑  , 1, j M ∈ .</formula><p>7.2. Modification of a vector of parameters of scales max min ( )</p><formula xml:id="formula_47">scale j j j N n x x N −   γ = β −     , 1, j M ∈ .</formula></div>
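The stages above can be sketched in Python as follows. This is a hedged reading of the method (population sampling from a location-scale mixture of Gaussian and Cauchy noise whose weights depend on the iteration number, elite-based update of the locations, scales shrinking over iterations); the function names, the test objective, and the exact weighting are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def metaheuristic(F, M, N=100, K=30, B=3, beta=0.1, x_min=-5.0, x_max=5.0):
    # Stage 1: initialize location and scale parameter vectors.
    loc = rng.uniform(x_min, x_max, M)
    scale = np.full(M, beta * (x_max - x_min))
    best_x, best_F = loc.copy(), F(loc)
    for n in range(1, N + 1):
        g = (N - n) / N  # Cauchy weight: heavy-tailed exploration early on
        # Stage 3: generate the population; the Gaussian weight n/N grows
        # with n, so late iterations perform a directed (local) search.
        noise = (n / N) * rng.standard_normal((K, M)) + g * rng.standard_cauchy((K, M))
        P = loc + scale * noise
        # Stage 4: evaluate and sort potential solutions by the goal function.
        vals = np.array([F(x) for x in P])
        order = np.argsort(vals)
        # Stage 6: keep the best solution over all iterations.
        if best_F > vals[order[0]]:
            best_F, best_x = vals[order[0]], P[order[0]].copy()
        # Stage 7: pull locations toward the mean of the B best solutions
        # and shrink the scales as iterations progress.
        elite_mean = P[order[:B]].mean(axis=0)
        loc = g * loc + (n / N) * elite_mean
        scale = np.full(M, beta * (x_max - x_min) * g)
    return best_x, best_F

# Example: minimize the sphere function in M = 2 dimensions.
best_x, best_F = metaheuristic(lambda x: float(np.sum(x ** 2)), M=2)
```

The design point to notice is the schedule: with the Cauchy weight (N−n)/N decaying to zero, early populations probe distant regions via heavy tails, while the final populations collapse around the adapted location vector.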
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">If n N</head><p>&lt; then 1 n n = + , go to a step 3. Result is * x .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6.">Algorithm for parametric identification of the production audit data transformation model based on metaheuristics</head><p>For the proposed parametric identification method of the model for transforming production audit data based on metaheuristics, an algorithm has been developed that is designed to be implemented on a GPU using the CUDA information parallel processing technology and is shown in Fig. <ref type="figure">3</ref>. This block diagram functions as follows.</p><p>Step 1. Operator input of the maximum number of iterations N , population size K , solution length M , maximum number of selected best solutions B , parameter to generate a vector of scale parameters β , 0 1 &lt; β &lt; , minimum and maximum values for the solution min max ,</p><formula xml:id="formula_48">j j x x , 1, j M ∈ .</formula><p>Step 2. Initialization of location parameters vector  Step 5. Setting the iteration number 1 n = .</p><p>Step 6. Building a current population of potential solutions P , those. generating each potential k -th solution using K M ⋅ GPU threads, which are grouped into K blocks. Each thread computes (0,1) (0,1)</p><formula xml:id="formula_49">loc scale kj j j N n n x С N N N   −     = γ + γ +             .</formula><p>Step 7. Sorting potential solutions by goal function, i.e.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">( ) ( )</head><formula xml:id="formula_50">k k F x F x + &lt;</formula><p>, based on parallel odd-even sort, using K GPU threads, which are grouped into 1 block.</p><p>Step 8. Determination based on parallel reduction of the best solution for the current population using K GPU threads, which are grouped into 1 block. Each thread computes a target function ( )</p><formula xml:id="formula_51">k F x * 1, arg min ( ) k k K k F x ∈ = .</formula><p>Step 9. Determining the best solution across all iterations if</p><formula xml:id="formula_52">* * ( ) ( ) k F x F x &lt; , then * * k x x = .</formula><p>Step 10. Calculation of each j -th parameter of the average location loc j γ  based on parallel reduction using MB GPU threads, which are grouped into M blocks.</p><p>Step 11. Modification of each j -th location parameter using M GPU threads, which are grouped into 1 block. Each thread computes</p><formula xml:id="formula_53">loc loc loc j j j n N n N N −     γ = γ + γ          .</formula><p>Step 12. Modification of each j -th scale parameter, using M GPU threads, which are grouped into 1 block. Each thread computes max min ( )</p><formula xml:id="formula_54">scale j j j N n x x N −   γ = β −     .</formula><p>Step 13. Termination condition. If n N &lt; , then 1 n n = + and go to step 6. Step 14. Recording the best solutions for all iterations in the database.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Numerical research</head><p>The numerical research of the offered method of parametrical identification was conducted with use of technology of parallel processing of information of CUDA in a MATLAB package, at the same time the amount of threads in the unit corresponded to the population size, sorting of population was carried out on the basis of an algorithm of pair and unpaired sorting, search of the best solution on the current population * k x finding of an average vector of a location loc γ  it was executed on the basis of an algorithm of a parallel reduction.</p><p>In this work population size</p><formula xml:id="formula_55">3 K M =</formula><p>, maximum number of iterations 100 N =</p><p>, parameter for generation of a vector of parameters of scales β =0.1, the maximum quantity of the selected best solutions B = 0.1K.</p><p>The results of the qualitative characteristics of the parametric identification methods of the proposed MRMLP neural network model, comparison in Table <ref type="table" target="#tab_5">1</ref>, where Q is the number of parameters for the MRMLP neural network, P is the power of the training set. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head><p>The backpropagation method in sequential mode:</p><p>- cannot be used in batch training mode, i.e. its computations cannot be parallelized on a GPU, which reduces the speed of finding a solution (Table <ref type="table" target="#tab_5">1</ref>);</p><p>- has a high probability of falling into a local extremum, which reduces the accuracy of the solution found (Table <ref type="table" target="#tab_5">1</ref>).</p><p>The proposed metaheuristic method eliminates these disadvantages.</p></div>
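The parallelization argument above can be illustrated numerically: the K objective evaluations of a population are mutually independent, so they collapse into a single data-parallel operation, whereas sequential backpropagation updates each depend on the previous one. A minimal NumPy sketch (the sphere objective is an arbitrary stand-in, not the MRMLP training error):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(300, 8))    # population of 300 candidate parameter vectors

# One-at-a-time evaluation (what a sequential method is forced into):
f_loop = np.array([np.sum(p ** 2) for p in P])

# The same evaluations as one vectorized, GPU-friendly kernel:
f_batch = np.sum(P ** 2, axis=1)

assert np.allclose(f_loop, f_batch)
```

On a GPU the batch form maps each candidate to its own thread, which is exactly what the proposed method exploits.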
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>The article considers the problem of reducing the risk of incorrect representation of data sets in an audit DSS by means of a method of neural network modeling of transformations of production audit data with returnable waste and intermediate products, based on a modified recurrent multilayer perceptron (MRMLP). The method of parametric identification of the MRMLP model was further developed: it takes the number of training iterations into account and combines the Gaussian and Cauchy distributions, which increases forecast accuracy, since in the initial iterations the whole search space is explored, while in the final iterations the search becomes directed. Software implementing the proposed methods was developed in the MATLAB package and investigated on data on the release of raw materials into production and the posting of finished products of a manufacturing enterprise, with a two-year sampling depth and daily time intervals. The experiments confirmed the operability of the developed software and allow recommending it for practical use in the automated analysis subsystem of an audit DSS for checking the mappings of data sets of the release of raw materials into production and the output of products. Further research will test the proposed methods on a broader set of test databases.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>x^μ is the μ-th training input vector, d^μ is the μ-th training output vector of finished goods, and d^{(k)μ} is the μ-th training output vector of non-returnable waste obtained after the k-th layer of semi-product production. The problem is to increase the accuracy of production audit data transformations on the model of the modified recurrent multilayer perceptron (MRMLP) g(x, w).</figDesc></figure>
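The Gaussian/Cauchy rationale stated in the conclusion can be checked with a quick numerical experiment: the heavy-tailed Cauchy distribution proposes large exploratory jumps far more often than the Gaussian, which is why it is useful in the initial iterations. The threshold of 3 scale units below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
gauss = rng.standard_normal(n)
cauchy = rng.standard_cauchy(n)

# Fraction of proposed steps larger than 3 scale units: the Cauchy
# distribution makes far jumps (global exploration) roughly two orders
# of magnitude more often than the Gaussian (local, directed search).
p_gauss = np.mean(np.abs(gauss) > 3)
p_cauchy = np.mean(np.abs(cauchy) > 3)
assert p_cauchy > 10 * p_gauss
```

Blending the two as a function of the iteration number therefore moves the search from exploration to refinement, as the parametric identification method prescribes.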
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Model of conversions of audit data domain subelements of a prerequisite "Completeness"</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The unit diagram of the modified recurrent multilayer perceptron (MRMLP) model</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>N^{(0)} is the number of neurons of the input layer (raw materials layer); b_j^{(k)} is the bias of the j-th neuron of the k-th semi-product production layer; \tilde{b}_j is the bias of the j-th neuron of the finished goods layer; \breve{b}_j^{(k)} is the bias of the j-th neuron of the k-th non-returnable waste layer; w_{ij}^{(k)} is the connection weight from the i-th neuron of the (k-1)-th semi-product production layer to the j-th neuron of the k-th semi-product production layer; \tilde{w}_{ij}^{(k)} is the connection weight from the i-th neuron of the k-th semi-product production layer to the j-th neuron of the (k-1)-th semi-product production layer; \tilde{w}_{jj} is the connection weight from the j-th neuron of the H-th semi-product production layer to the j-th neuron of the finished goods layer; \breve{w}_{jj}^{(k)} is the connection weight from the j-th neuron of the k-th semi-product production layer to the j-th neuron of the k-th non-returnable waste layer; y_j^{(k)}(n) is the output of the j-th neuron of the k-th semi-product production layer at timepoint n; \tilde{y}_j(n) is the output of the j-th neuron of the finished goods layer at timepoint n; \breve{y}_j^{(k)}(n) is the output of the j-th neuron of the k-th non-returnable waste layer at timepoint n; f^{(k)} is the activation function of neurons of the k-th semi-product production layer; \tilde{f} is the activation function of neurons of the finished goods layer; \breve{f}^{(k)} is the activation function of neurons of the k-th non-returnable waste layer.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head></head><label></label><figDesc>Determination of the best solution (best vector of the MRMLP model parameters). 3. Creation of the current population of potential solutions P. 3.1. Solution number k = 1, P = ∅. 3.2. Generation of a new potential solution x_k (vector of the MRMLP model parameters).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>Transition to step 3.2. 4. Sorting of P by the objective function, i.e. determination of the best solution (best vector of the MRMLP model parameters) in the current population: x^* = \arg\min_k F(x_k).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head></head><label></label><figDesc>3. Initialization of the vector of scale parameters. 4. Determining the best solution. 5. Setting the iteration number n = 1. 6. Building a current population of potential solutions. 7. Sorting potential solutions. 8. Determining the best solution for the current population. 9. Determining the best solution across all iterations. 10. Calculation of the parameters of the average location. 11. Modification of location parameters. 12. Modification of scale parameters. 13. Checking the termination condition. 14. Writing the obtained best solution to the database.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Block diagram of the algorithm for parametric identification of the model for transforming production audit data based on metaheuristics</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The neurons forming them, presented in fig. 2, are designated in solid black; the not fully connected non-recurrent layer of finished goods is formed by the neurons designated in solid gray.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 1</head><label>1</label><figDesc>Comparison of the qualitative characteristics of parametric identification methods of the proposed neural network model MRMLP</figDesc><table><row><cell cols="2">Parametric identification method</cell><cell>coefficient of determination</cell><cell>computational complexity</cell></row><row><cell cols="2">Backpropagation method in sequential learning mode</cell><cell>0.80</cell><cell>~NPQ (forward / backward)</cell></row><row><cell cols="2">Proposed metaheuristic method using GPU</cell><cell>0.95</cell><cell>~NK</cell></row></table></figure>
		</body>
		<back>
		</back>
	</text>
</TEI>
