<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Application of global optimization methods to increase the accuracy of classification in the data mining tasks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Application of global optimization methods to increase the accuracy of classification in the data mining tasks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">AA635683ED6481C3F5C3720E4D108CC1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>data mining</term>
					<term>classification</term>
					<term>imbalance problem</term>
					<term>cost-sensitive learning</term>
					<term>imbalanced data</term>
					<term>principal components</term>
					<term>neural-like structure of successive geometric transformations model</term>
					<term>NLS SGTM</term>
					<term>simulated annealing</term>
					<term>analysis of the principal components</term>
					<term>optimization methods</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article describes solving a data mining classification task using neural-like structures of the Successive Geometric Transformations Model (NLS SGTM). The main difficulties of this task are an imbalanced dataset and different weights of errors. To take these features into account, the method of penalties and rewards was used, as well as a piecewise-linear approach to classification. It is proposed to supplement these methods with a final optimization procedure based on simulated annealing.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In the previous articles <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref>, methods based on combining the Successive Geometric Transformations Model with the method of penalties and rewards were described. In addition, a piecewise-linear approach to constructing separating surfaces in classification tasks was developed.</p><p>The purpose of these methods is to solve data mining classification tasks. The main features of such tasks are large datasets, class imbalance, and different weights of errors. The main goal of this research is to increase the accuracy of classification and to minimize the number of penalty points.</p><p>To increase the accuracy of classification, we propose to supplement the developed methods with final optimization procedures, in particular the method of random correction of decomposition elements and the method of simulated annealing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Problem statement</head><p>Data mining tasks are widespread today because most companies have accumulated huge amounts of data about sales, customers, orders, and more. This information is a source of hidden knowledge, and possession of this knowledge allows a company to take a leading position in the market and to win the competitive struggle. Among such tasks, one of the most popular is classification. These tasks arise daily in such areas as commerce, telecommunications, the chemical industry, target marketing, insurance, medicine, bioinformatics, and others. Researchers use different methods to solve classification problems <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>The main features of classification tasks in data mining are imbalanced data, different weights of errors, and huge amounts of data. These features require supplementing well-known classification methods with special additional techniques to provide high classification accuracy.</p><p>As the basic classification method, we used the neural-like structure of the successive geometric transformations model (NLS SGTM) <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. As additional methods, the piecewise-linear approach <ref type="bibr" target="#b0">[1]</ref> and the cost-sensitive learning method <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> were used. This allowed us to improve the classification accuracy and to take into account the specifics of a particular task. This article proposes to apply global optimization methods to the neural-like structure already trained in previous experiments. This will allow us to find the parameters of the neural-like structure at which the sum of points reaches the global maximum.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Increasing the accuracy of classification using random correction of decomposition elements</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Analysis of the principal components</head><p>The analysis of the principal components is a standard method used to reduce the dimensionality of data in statistical pattern recognition and signal processing systems. However, it is also advisable to use it for data mining problems because of their high dimensionality <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>.</p><p>The main task of statistical recognition is feature extraction: the process in which the data space is transformed into a feature space that theoretically has the same dimension as the input space. The transformation, however, is usually performed so that a reduced number of the most effective features can represent the data space. Consequently, only a substantial part of the information contained in the data remains, and the dimension of the data is reduced. If this approach is applied to a data mining task, we reduce the size of the input data by discarding non-informative features without losing significant data. Let us consider the analysis of the principal components (known in information theory as the Karhunen-Loeve transform) in more detail <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>Assume that there exists a vector X of dimension m, which we want to convey with the help of l numbers, where l &lt; m. If we simply truncate the vector X, the mean square error will equal the sum of the variances of the elements cut out of the vector X. It is necessary to find a linear transformation T for which the mean square error of the reduction of the vector X is minimal. Such a transformation T must have the property of small variance for its individual components. The analysis of the principal components maximizes the rate of variance reduction and, accordingly, the probability of the correct choice <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>.</p><p>Let X be an m-dimensional random vector from the initial data set, with zero mean value:</p><formula xml:id="formula_0">E(X) = 0, <label>1</label></formula><p>where E is the operator of statistical expectation. If X has a nonzero average, this value can be subtracted before the analysis begins. Let q be a unit vector of dimension m onto which the vector X is projected. This projection is defined as the inner product of the vectors X and q:</p><formula xml:id="formula_1">A = X^T q = q^T X <label>2</label></formula><p>with the restriction ||q|| = (q^T q)^0.5 = 1. (3) The projection A is a random variable whose mean and variance are related to the statistics of the random vector X. The variance of A equals</p><formula xml:id="formula_2">σ^2 = E[A^2] = E[(q^T X)(X^T q)] = q^T E[X X^T] q = q^T R q <label>4</label></formula><p>The matrix R of dimension m×m is the correlation matrix of the random vector X, defined as the expectation of the product of the random vector X with itself:</p><formula xml:id="formula_3">R = E[X X^T]<label>( 5 )</label></formula><p>The matrix R is symmetric, so R^T = R. (6) It follows from (<ref type="formula">6</ref>) that if a and b are arbitrary vectors of dimension m×1, then a^T R b = b^T R a. (7) From equation (<ref type="formula">4</ref>) it follows that the projection A is a function of the unit vector q, so we have</p><formula xml:id="formula_4">ψ(q) = σ^2 = q^T R q,<label>( 8 )</label></formula><p>where ψ(q) is the variance probe.</p><p>The principal components themselves are defined as follows. Let the data vector x be a realization of the random vector X. Since there are m possible values of the unit vector q, we must consider m possible projections of the data vector x. By formula (2), a_j = q_j^T x = x^T q_j, j = 1, 2, …, m (9) where a_j are the projections of the vector x onto the principal directions, represented by the unit vectors q_j. These projections are called principal components; their number corresponds to the dimension of the data vector x. In this case, formula (<ref type="formula">9</ref>) can be considered the analysis procedure.</p></div>
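To make formulas (1)-(9) concrete, the projection of centered data onto the leading principal directions can be sketched in Python (a minimal illustration using NumPy's eigendecomposition of the correlation matrix; the function name and the random data are ours, not from the original implementation):

```python
import numpy as np

def principal_components(X, l):
    """Project m-dimensional rows of X onto the l leading principal directions.

    X : (n, m) data matrix; it is centered so that E(X) = 0 holds (formula (1)).
    Returns the (n, l) matrix of projections a_j = q_j^T x (formula (9)).
    """
    X = X - X.mean(axis=0)             # enforce zero mean before the analysis
    R = (X.T @ X) / len(X)             # correlation matrix R = E[X X^T] (formula (5))
    eigvals, Q = np.linalg.eigh(R)     # R is symmetric (formula (6)), so eigh applies
    order = np.argsort(eigvals)[::-1]  # sort directions by decreasing dispersion
    Q = Q[:, order[:l]]                # keep the l unit vectors q_j, l smaller than m
    return X @ Q                       # principal components of each data vector

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))
pcs = principal_components(data, 2)
```

Keeping only the directions with the largest eigenvalues maximizes the retained dispersion, which is exactly the optimality property of the transformation T described above.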
<div xmlns="http://www.tei-c.org/ns/1.0"><head> </head><p>Fig. <ref type="figure">1</ref>. Scheme of the procedure for analysis of the principal components. The encoding step</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head> </head><p>Allocation of the principal components (PC) also proceeds on the outputs of the hidden layer of the neural-like structure of GTM. After selecting the PC, we create and train an additional neural-like structure (Fig. <ref type="figure" target="#fig_0">2</ref>), where the inputs are the vectors of the PC and the outputs are the values of the corresponding outputs of the initial training data <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17]</ref>.</p><p>We obtain the elements of decomposition</p><formula xml:id="formula_5">z_0 = y_0, z_i = y_i PC_i, i = 1, 2, …, m.</formula><p>Initially we have</p><formula xml:id="formula_6">y = Σ_{i=0..m} z_i or y = Σ_{i=0..m} k_i z_i, <label>( 10 )</label></formula><p>where z_i are the elements of decomposition and k_i = 1, i = 1, 2, …, m.</p><p>For classification tasks, the appropriate indicators (percentage of properly classified specimens, number of penalty points, mean arithmetic or mean square error) are optimized by random correction of the decomposition coefficients k_i.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Random correction of coefficients</head><p>Random correction of the coefficients k_i can be performed simultaneously for all components or for each component independently. It should be noted that component-based random correction is appropriate for prediction, since it has been experimentally confirmed that the components are practically independent <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>.</p><p>Let us consider the random correction algorithm in more detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Increasing the accuracy of the classification tasks based on correction of decomposition elements by random correction</head><p>Let us consider optimizing the accuracy of classification for the case where the recognized data belong to one of two classes.</p><p>We re-code the input file: the outputs of elements belonging to class 1 are assigned 1, and the outputs of elements belonging to class 2 are assigned -1.</p><p>The recognition problem is solved as a prediction problem by formula (10), but after obtaining the value y we analyze it by formula (11):</p><formula xml:id="formula_7">class 1 if y ≥ 0, class 2 if y &lt; 0. <label>( 11 )</label></formula><p>For the recognition task, the criteria by which optimization can be carried out are the number of penalty points, the percentage of incorrectly classified representatives of class 1, and the percentage of incorrectly classified representatives of class 2.</p><p>The percentage of incorrectly classified representatives of each class is calculated as follows:</p><formula xml:id="formula_8">ErrC1 = 100% · NWC1/NC1, ErrC2 = 100% · NWC2/NC2, <label>12</label></formula><p>where NC1 and NC2 are the numbers of representatives of classes 1 and 2 in the training sample, and NWC1 and NWC2 are the numbers of incorrectly classified representatives of classes 1 and 2.</p><p>To calculate the number of penalty points, it is necessary to determine the fines charged for erroneous recognition: P1 is the fine charged if an element of class 1 is recognized as an element of class 2, and P2 is the fine charged if an element of class 2 is recognized as an element of class 1.</p><p>Then the value of the penalty function, the total number of received penalty points (PP), is:</p><p>PP = ErrC1·P1 + ErrC2·P2. (13)</p><p>Optimization method for classification tasks based on correction of decomposition elements by random correction:</p><p>1. Set initial values k_i = 1.</p><p>2. Calculate the value of the error being optimized (PP, ErrC1 or ErrC2).</p><p>3. Using a random number generator with a uniform distribution, choose a value ΔD from the range (-D, D).</p><p>4. Calculate new values k_i = k_i + ΔD.</p><p>5. Calculate the values of the outputs with the new k_i.</p><p>6. Convert each resulting value into the designation of the class to which the element belongs.</p><p>7. Calculate the new value of the error being optimized (PP, ErrC1 or ErrC2).</p><p>8. Compare the new error value with the previously calculated one. If the new value is smaller, remember it together with the current coefficients k_i and go to step 3. Otherwise, go to step 3 without remembering.</p><p>9. Continue the optimization until the predefined desired optimization value is reached or until the time t has expired.</p><p>After performing the optimization method by randomly correcting the decomposition elements, the resulting coefficients are used for further classification.</p></div>
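The nine steps above can be sketched as a plain random-search loop (a hedged illustration: the penalty_points helper evaluates the decomposition of formula (10) with the decision rule (11) and the penalty total (13); all names and data shapes are ours, not from the original implementation, and here each coefficient receives its own delta, which corresponds to component-wise correction):

```python
import random

def penalty_points(k, Z, labels, P1, P2):
    """PP = ErrC1*P1 + ErrC2*P2 (formulas (12)-(13)); Z[j] holds the
    decomposition elements z_i of sample j, labels[j] is 1 or 2."""
    wrong1 = wrong2 = n1 = n2 = 0
    for z, label in zip(Z, labels):
        y = sum(ki * zi for ki, zi in zip(k, z))  # y per formula (10)
        predicted = 1 if y >= 0 else 2            # decision rule (11)
        if label == 1:
            n1 += 1
            if predicted == 2:
                wrong1 += 1
        else:
            n2 += 1
            if predicted == 1:
                wrong2 += 1
    err1 = 100.0 * wrong1 / max(n1, 1)            # ErrC1
    err2 = 100.0 * wrong2 / max(n2, 1)            # ErrC2
    return err1 * P1 + err2 * P2                  # PP, formula (13)

def random_correction(Z, labels, P1, P2, D=0.1, steps=1000, seed=0):
    """Steps 1-9: random search over the coefficients k_i."""
    rng = random.Random(seed)
    best_k = [1.0] * len(Z[0])                            # step 1: k_i = 1
    best_pp = penalty_points(best_k, Z, labels, P1, P2)   # step 2
    for _ in range(steps):                                # step 9: fixed budget
        # steps 3-4: perturb each coefficient by a uniform value from (-D, D)
        k = [ki + rng.uniform(-D, D) for ki in best_k]
        pp = penalty_points(k, Z, labels, P1, P2)         # steps 5-7
        if best_pp > pp:                                  # step 8: keep improvements
            best_k, best_pp = k, pp
    return best_k, best_pp
```

The resulting coefficients best_k would then be used for further classification, as the text describes.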
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Application of global optimization methods to increase the accuracy of classification in the data mining tasks</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Method of simulated annealing</head><p>The annealing method is an algorithmic analogue of the controlled cooling process. It was proposed in 1953 by N. Metropolis and refined by numerous followers. Today it is considered one of the few methods by which one can practically find the global minimum of a function of several variables. Let us consider the simulated annealing method in more detail <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm of the simulated annealing method:</head><p>1. Start the process from the starting point at a given initial temperature T = T_max.</p><p>2. As long as T&gt;0, repeat the following actions L times:</p><p>• choose a new solution w' from the vicinity of w;</p><p>• calculate the change of the target function, Δ = E(w') - E(w);</p><p>• if Δ≤0, take w = w'; otherwise, if Δ&gt;0, take w = w' with probability exp(-Δ/T): generate a random number R from the interval (0,1) and compare it with exp(-Δ/T); if exp(-Δ/T) &gt; R, take the new solution w = w'; otherwise ignore it.</p><p>3. Reduce the temperature (T = rT) using a reduction coefficient r selected from the interval (0,1) and return to step 2.</p><p>4. After lowering the temperature to zero, apply one of the deterministic methods (the Levenberg-Marquardt algorithm, error backpropagation, steepest descent, etc.) to reach the minimum of the target function.</p><p>The concept of "temperature" in this algorithm is quite formal, since the presented optimization model is only a mathematical analogy of the annealing process.</p><p>The efficiency of the annealing algorithm depends extremely strongly on the choice of parameters such as the initial temperature T_max, the temperature reduction coefficient r, and the number of cycles L performed at each temperature level.</p><p>The main problem is to determine the threshold level optimal for each annealing simulation process. For some practical tasks, this level may take different values, but the overall range remains unchanged. As a rule, the initial temperature is selected so as to ensure the acceptance of about 50% of the subsequent random changes in the solution. 
Therefore, knowledge of the pre-distribution of such changes makes it possible to estimate the initial temperature approximately.</p><p>Numerous computer experiments <ref type="bibr" target="#b21">[22]</ref> show that when the time limit is small, a single run gives the best results. If the simulation can be long-lasting, then statistically better results can be achieved through multiple runs of the annealing simulation with a value of the coefficient r close to 1.</p><p>If we compare genetic algorithms with the annealing algorithm, then, despite the significant external difference between them, they are essentially similar in nature. According to <ref type="bibr" target="#b22">[23]</ref>, the annealing algorithm can be considered a genetic algorithm with a population consisting of a single instance. Consequently, the simulated annealing algorithm can be regarded as an algorithm that has only a mutation operation, but no crossover.</p><p>In addition, if we compare these two algorithms from an applied point of view, it should be noted that, according to Kohonen's study <ref type="bibr" target="#b6">[7]</ref>, when the initial solution is sufficiently close to optimal, the annealing algorithm has significant advantages over genetic algorithms from a computational point of view.</p><p>Since in our study the initial data are pre-processed by the method of fines and incentives with sample alignment and piecewise-linear classification on the basis of the model of geometric transformations, the initial solution of the problem is sufficiently close to optimal. Accordingly, in this case it is more appropriate to choose the simulated annealing algorithm for optimization of the solution.</p></div>
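Steps 1-3 of the algorithm above can be sketched as follows (a minimal illustration: the target function E and the neighbour move are placeholders that would be replaced by the penalty-point total and a coefficient perturbation in this article's setting, and the deterministic finishing step 4 is omitted):

```python
import math
import random

def simulated_annealing(E, w0, neighbour, T_max=100.0, r=0.9, L=100, T_min=1e-6, seed=0):
    """Anneal from w0, accepting worse solutions with probability exp(-delta/T)."""
    rng = random.Random(seed)
    w = best = w0
    T = T_max
    while T > T_min:                   # step 2: repeat while T stays positive
        for _ in range(L):             # L trials at each temperature level
            w_new = neighbour(w, rng)  # a new solution from the vicinity of w
            delta = E(w_new) - E(w)    # change of the target function
            accept = True              # improvements are always taken
            if delta > 0:              # worse solution: Metropolis criterion
                accept = math.exp(-delta / T) > rng.random()
            if accept:
                w = w_new
                if E(best) > E(w):     # track the best solution seen so far
                    best = w
        T = r * T                      # step 3: geometric cooling, T = rT
    return best
```

A large T_max accepts roughly half of the worsening moves early on, as the text recommends, while the geometric cooling T = rT with r close to 1 gives the statistically better long runs mentioned above.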
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Improvement of the accuracy of solving data mining tasks on the basis of correction of decomposition elements by the simulated annealing algorithm</head><p>Let us consider in more detail the simulated annealing algorithm in combination with the methods of fines and incentives and piecewise-linear classification on the basis of the model of geometric transformations. The simulated annealing method is proposed for optimizing the weight coefficients so that the resulting number of penalty points is minimal; that is, the optimization parameter is the number of penalty points <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b16">17]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 4. The division tree into classes based on NLS SGTM</head><p>As can be seen from the flowchart describing the solution of the data mining problem by combining the method of fines and incentives, simulated annealing, and the piecewise-linear approach, a modified annealing method, for which the target function is the number of penalty points, is applied separately for each cluster <ref type="bibr" target="#b21">[22,</ref><ref type="bibr">24]</ref>.</p><p>Accordingly, if we have a two-step division into clusters for a sample of n classes, then we obtain the division into clusters depicted in Fig. <ref type="figure">4</ref>, and for each of the clusters a modified annealing algorithm is implemented.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental results</head><p>This article describes the solving of the classification task formulated in <ref type="bibr" target="#b0">[1]</ref>. The training sample describes the transactions carried out by credit card holders within two days and consists of 284,807 rows and 31 columns. The dataset contains one target feature, 'Class', which shows the client's affiliation to one of two classes: frauds or ordinary clients. The main feature of the dataset is that it is highly unbalanced: only 492 transactions out of 284,807 (0.172% of all transactions) have the target field value 1, that is, are fraudulent. The dataset was collected and analyzed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection <ref type="bibr" target="#b6">[7]</ref>.</p><p>According to the subject area, a matrix of penalties and rewards was formed. Analyzing this matrix, it can be seen that a properly classified vector that belongs to the "fraud" class has a much greater weight than a properly classified "ordinary client" vector. At the same time, the case where an ordinary customer is classified as fraud carries the highest number of penalty points (Table <ref type="table">1</ref>).</p><p>Then we used the modified simulated annealing method with the parameters: initial temperature T = T_max = 20895, L = 100, R a random number from the interval (0,1), r = 0.9.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1. Matrix of penalties and rewards for the task being solved</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Matrix of Rewards and Penalties</head><p>Values of rewards and penalties:</p><p>The vector belongs to class 1: recognized as class 1, value 1; recognized as class 2, value -3.</p><p>The vector belongs to class 2: recognized as class 1, value -2; recognized as class 2, value 5.</p><p>After lowering the temperature to zero, one of the deterministic methods (the Levenberg-Marquardt algorithm, error backpropagation, steepest descent, etc.) is applied to reach the minimum of the target function. The results of classification are shown in Fig. 5 and Fig. <ref type="figure">6</ref>.</p><p>Fig. <ref type="figure">5</ref>. Results of classification using NLS SGTM with the method of penalties and rewards and the method of simulated annealing (in vectors). Fig. <ref type="figure">6</ref>. Results of classification using NLS SGTM with the method of penalties and rewards and the method of simulated annealing (in points).</p></div>
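With the Table 1 values, the total score for a given confusion matrix can be checked directly (only the reward and penalty values come from Table 1; the counts below are hypothetical, for illustration only):

```python
# Rewards and penalties from Table 1; the first index is the true class,
# the second is the class the vector is recognized as.
REWARD = {(1, 1): 1, (1, 2): -3,
          (2, 1): -2, (2, 2): 5}

def total_points(confusion):
    """confusion maps (true class, recognized class) to a count of vectors."""
    return sum(REWARD[cell] * count for cell, count in confusion.items())

# Hypothetical counts, for illustration only.
example = {(1, 1): 480, (1, 2): 12, (2, 1): 30, (2, 2): 284285}
score = total_points(example)
```

A sum of this kind is what the modified annealing method maximizes when the target function is expressed in points.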
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>The application of global optimization methods to classification in data mining tasks made it possible to increase the accuracy of classification, especially in combination with other classification methods such as the neural-like structure of the successive geometric transformations model. The method of simulated annealing was also successfully combined with methods such as the piecewise-linear approach and the cost-sensitive learning method. The application of the simulated annealing method made it possible to reach the point of the global maximum and to minimize the number of penalty points for this task.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. The structure of the neural network for obtaining decomposition elements. A feature of this neural network is the representation of the output of the synapse.</figDesc><graphic coords="4,125.28,207.36,344.64,114.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>where NC1 is the number of representatives of class 1 in the training sample, NC2 is the number of representatives of class 2 in the training sample, NWC1 is the number of incorrectly classified representatives of class 1, and NWC2 is the number of incorrectly classified representatives of class 2.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 </head><label>3</label><figDesc>Fig. 3 depicts the structural scheme of the developed neural network based on the model of geometric transformations, where the classification objects are the input data; PC_1, PC_2, ..., PC_n are the principal components derived from the input data; k_i are the weight coefficients; and y is an output that indicates belonging to certain classes.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. The scheme of the neural-like structure GTM. The functioning of such a neural network can be described by formula (14).</figDesc><graphic coords="8,156.36,147.36,282.48,133.20" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Piecewise-Linear Approach to Classification Based on Geometrical Transformation Model for Imbalanced Dataset</title>
		<author>
			<persName><forename type="first">A</forename><surname>Doroshenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining &amp; Processing (DSMP)</title>
				<meeting>the 2018 IEEE Second International Conference on Data Stream Mining &amp; Processing (DSMP)<address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="231" to="235" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Classification of Imbalanced Classes using the Committee of Neural Networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doroshenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the XIIIth International Scientific and Technical Conference Computer Sciences and Information Technologies (CSIT)</title>
				<meeting>the XIIIth International Scientific and Technical Conference Computer Sciences and Information Technologies (CSIT)<address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-09-14">11-14 September 2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Alloys selection based on the supervised learning technique for design of biocompatible medical materials</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Tepla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Izonin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">А</forename><surname>Duriagina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">О</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">А</forename><forename type="middle">М</forename><surname>Trostianchyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">І</forename><forename type="middle">А</forename><surname>Lemishka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Kulyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Т</forename><forename type="middle">М</forename><surname>Kovbasyuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Archives of Materials Science and Engineering</title>
		<imprint>
			<biblScope unit="volume">93</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="32" to="40" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The Combined Use of the Wiener Polynomial and SVM for Material Classification Task in Medical Implants Production</title>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Trostianchyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Duriagina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tepla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Lotoshynska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Intelligent Systems and Applications (IJISA)</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="40" to="47" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Neurolike networks on the basis of Geometrical Transformation Machine</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Yurchak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Polishchuk</surname></persName>
		</author>
		<idno type="DOI">10.1109/MEMSTECH.2008.4558743</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 International Conference on Perspective Technologies and Methods in MEMS Design</title>
				<meeting>the 2008 International Conference on Perspective Technologies and Methods in MEMS Design<address><addrLine>Polyana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="77" to="80" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Features of the autoassociative neuro like structures of the geometrical transformation machine (GTM)</title>
		<author>
			<persName><forename type="first">U</forename><surname>Polishchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Yurchak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2009 5th International Conference on Perspective Technologies and Methods in MEMS Design</title>
				<meeting>the 2009 5th International Conference on Perspective Technologies and Methods in MEMS Design<address><addrLine>Zakarpattya</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="66" to="67" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Calibrating probability with undersampling for unbalanced classification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dal Pozzolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Caelen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bontempi</surname></persName>
		</author>
		<idno type="DOI">10.1109/SSCI.2015.33</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence</title>
				<meeting>the 2015 IEEE Symposium Series on Computational Intelligence<address><addrLine>Cape Town</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="159" to="166" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Benefit-Cost Based Method for Cost-Sensitive Decision Trees</title>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the WRI Global Congress on Intelligent Systems</title>
				<meeting>the WRI Global Congress on Intelligent Systems<address><addrLine>Xiamen</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="463" to="467" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Feature extraction using PCA and Kernel-PCA for face recognition</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Ebied</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2012 8th International Conference on Informatics and Systems (INFOS)</title>
				<meeting>the 2012 8th International Conference on Informatics and Systems (INFOS)<address><addrLine>Cairo</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="MM-72" to="MM-77" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Quasi-Relief Method of Informative Features Selection for Classification</title>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</title>
				<meeting>the 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)<address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="318" to="321" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A greedy approach to the distributed Karhunen-Loève transform</title>
		<author>
			<persName><forename type="first">A</forename><surname>Amar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Leshem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gastpar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing</title>
				<meeting>the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing<address><addrLine>Dallas, TX</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="2970" to="2973" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">An explicit algorithm for training support vector machines</title>
		<author>
			<persName><forename type="first">D</forename><surname>Mattera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Palmieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Haykin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Letters</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="243" to="245" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Synthesis of optimal recovery systems in distributed computing using ideal ring bundles</title>
		<author>
			<persName><forename type="first">O</forename><surname>Riznyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Yurchak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Povshuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 XII International Conference on Perspective Technologies and Methods in MEMS Design (MEMSTECH)</title>
				<meeting>the 2016 XII International Conference on Perspective Technologies and Methods in MEMS Design (MEMSTECH)<address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="220" to="222" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Classification rule mining using feature selection and genetic algorithm</title>
		<author>
			<persName><forename type="first">Xin</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qian</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ziqiang</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications</title>
				<meeting>the 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications<address><addrLine>Wuhan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="107" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A Fuzzy Clustering Approach Using Reward and Penalty Functions</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery</title>
				<meeting>the 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery<address><addrLine>Tianjin</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="18" to="21" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Remote Sensing Textual Image Classification based on Ensemble Learning</title>
		<author>
			<persName><forename type="first">Zhiwei</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Juan</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xu</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhengbing</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Image, Graphics and Signal Processing (IJIGSP)</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="21" to="29" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Noniterative Neural-like Predictor for Solar Energy in Libya</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Cutucu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doroshenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tsymbal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th International Conference ICTERI 2018. Volume I: Main Conference</title>
				<editor>
			<persName><forename type="first">V</forename><surname>Ermolayev</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Suárez-Figueroa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Yakovyna</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Mayr</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Nikitchenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Spivakovsky</surname></persName>
		</editor>
		<meeting>the 14th International Conference ICTERI 2018. Volume I: Main Conference<address><addrLine>Kyiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">May 14-17, 2018</date>
			<biblScope unit="page" from="35" to="45" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Parametrical synthesis of neural network models based on the evolutionary optimization</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oleynik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2009 10th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics</title>
				<meeting>the 2009 10th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics<address><addrLine>Lviv-Polyana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="335" to="338" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Iterative annealing: a new efficient optimization method for cellular neural networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Feiden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tetzlaff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2001 International Conference on Image Processing (Cat. No.01CH37205)</title>
				<meeting>the 2001 International Conference on Image Processing (Cat. No.01CH37205)<address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="549" to="552" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Simulated annealing: A proof of convergence</title>
		<author>
			<persName><forename type="first">V</forename><surname>Granville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Krivanek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-P</forename><surname>Rasson</surname></persName>
		</author>
		<idno type="DOI">10.1109/34.295910</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="652" to="656" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Very fast simulated annealing for pattern detection and seismic applications</title>
		<author>
			<persName><forename type="first">K</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hsieh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE International Geoscience and Remote Sensing Symposium</title>
				<meeting>the IEEE International Geoscience and Remote Sensing Symposium<address><addrLine>Vancouver, BC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="499" to="502" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Data Reduction by Genetic Algorithms and Non-Algebraic Feature Construction: A Case Study</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Shafti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pérez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 Eighth International Conference on Hybrid Intelligent Systems</title>
				<meeting>the 2008 Eighth International Conference on Hybrid Intelligent Systems<address><addrLine>Barcelona</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="573" to="578" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">An Ensemble of Adaptive Neuro-Fuzzy Kohonen Networks for Online Data Stream Fuzzy Clustering</title>
		<author>
			<persName><forename type="first">Zhengbing</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">V</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">K</forename><surname>Tyshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">O</forename><surname>Boiko</surname></persName>
		</author>
		<idno type="DOI">10.5815/ijmecs.2016.05.02</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Modern Education and Computer Science (IJMECS)</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="12" to="18" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
