<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Finding Anomalies in the Operation of Automated Control Systems Using Machine Learning</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yurii</forename><surname>Hodlevskyi</surname></persName>
							<email>godlevskiy.yuriy@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Infopulse Ukraine</orgName>
								<address>
									<addrLine>13a, Trypilska Street</addrLine>
									<postCode>10003</postCode>
									<settlement>Zhytomyr</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tetiana</forename><surname>Vakaliuk</surname></persName>
							<email>tetianavakaliuk@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Zhytomyr Polytechnic State University</orgName>
								<address>
									<addrLine>103 Chudnivsyka Str</addrLine>
									<postCode>10005</postCode>
									<settlement>Zhytomyr</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">Institute for Digitalisation of Education</orgName>
								<orgName type="institution">NAES of Ukraine</orgName>
								<address>
									<addrLine>9 M. Berlynskoho Str</addrLine>
									<postCode>04060</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="institution">Kryvyi Rih State Pedagogical University</orgName>
								<address>
									<addrLine>54 Gagarin Ave</addrLine>
									<postCode>50086</postCode>
									<settlement>Kryvyi Rih</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Chyzhmotria</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Zhytomyr Polytechnic State University</orgName>
								<address>
									<addrLine>103 Chudnivsyka Str</addrLine>
									<postCode>10005</postCode>
									<settlement>Zhytomyr</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olena</forename><surname>Chyzhmotria</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Zhytomyr Polytechnic State University</orgName>
								<address>
									<addrLine>103 Chudnivsyka Str</addrLine>
									<postCode>10005</postCode>
									<settlement>Zhytomyr</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleh</forename><surname>Vlasenko</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Zhytomyr Polytechnic State University</orgName>
								<address>
									<addrLine>103 Chudnivsyka Str</addrLine>
									<postCode>10005</postCode>
									<settlement>Zhytomyr</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff4">
								<orgName type="department">International Workshop on Intelligent Information Technologies and Systems of Information Security</orgName>
								<address>
									<addrLine>March 22-24</addrLine>
									<postCode>2023</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Finding Anomalies in the Operation of Automated Control Systems Using Machine Learning</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">089A730458DDEDB9D627CD1BEFCB8C5B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Machine Learning</term>
					<term>Gradient Descent</term>
					<term>Learning Algorithm</term>
					<term>Adaptive Movement Estimation</term>
					<term>Long Short-term Memory</term>
					<term>Diapason</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This article addresses the problem of detecting anomalies in the operation of automated systems. Anomaly detection helps prevent breakdowns and improve the performance of such systems. Sensors installed in automated systems make it possible to read the state of particular system parameters and thus monitor the system's condition at any moment. However, manually reviewing system indicators is far from optimal, as it wastes human resources. It is also possible to set limit values for the sensors and display a message whenever an indicator goes beyond them. Yet this solution does not suit all signals: abnormal values may occur within the limits and be ignored by such a system, or a sensor may change its operating range, in which case operators receive a large number of false messages. Instead, it is possible to implement a system that detects anomalies automatically using artificial intelligence, learning from existing historical data and notifying the operator of a malfunction.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The problem of detecting anomalies in complex automated control systems arises in many spheres of human activity, and it can be approached in different ways depending on the budget and the ability to maintain the systems involved.</p><p>Consider, as an example, electricity generation stations in hard-to-reach places on our planet. Controlling their operation is complicated by, among other things, weather conditions, the difficulty of keeping personnel on site, and the need to track data coming back from many sensors that return different kinds of values for different components. This problem can be approached in several ways.</p><p>The most obvious is round-the-clock monitoring by staff. But solving the problem this way increases the number of personnel, complicates work schedules, and introduces the risk of human error.</p><p>Another option is to fix minimum and maximum values for the sensors. The drawbacks of this approach are the difficulty of setting limits for each sensor separately and of updating them whenever a sensor's operating range changes, which slows down and complicates the operation of such systems.</p><p>A further option is to create an application that finds anomalies in the operation of the automated control system using machine learning. Thanks to machine learning, the system adapts automatically to the variety of ranges and values being monitored.</p><p>In this application, it is planned to use data analysis methods based on the LSTM neural network architecture to search for anomalous values in the operation of automated systems whose sensors return values in different ranges with different periodicity. The distinctive feature of this approach is that the program adapts to different ranges and searches for anomalies in various zones, which broadens its applicability across domains.</p><p>Objective: development of a system that can be used to detect anomalies in the operation of automated control systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Benjamin Lindemann, Benjamin Maschler, Nada Sahlab, and Michael Weyrich give an overview of promising LSTM-based approaches to anomaly detection, with an additional focus on emerging graph-based and transfer learning approaches. All approaches are evaluated against application-oriented criteria such as detection capability for temporal anomalies, achieved accuracy, and the use cases addressed in the original publications. They present use cases that may be useful in various areas, but not example applications that solve the real problem of detecting anomalies at different types of power stations <ref type="bibr" target="#b12">[13]</ref>.</p><p>Gian Antonio Susto, Matteo Terzi, and Alessandro Beghi describe an application in which anomaly detection strategies were tested on a real industrial dataset from a semiconductor manufacturing etching process. Their application is a powerful tool, but only for one limited area: it cannot be scaled or customized for other manufacturing processes <ref type="bibr" target="#b13">[14]</ref>.</p><p>Zhe Li, Jingyue Li, Yi Wang, and Kesheng Wang proposed a novel deep learning method for anomaly detection in mechanical equipment that combines two architectures, stacked autoencoders (SAE) and long short-term memory (LSTM) neural networks, to identify anomalous conditions in a completely unsupervised manner. Their experiment on anomaly detection in rotary machinery, using wavelet packet decomposition (WPD) and data-driven models, demonstrated the efficiency and stability of the proposed approach. However, their work applies only to rotary machinery and does not scale to other types of equipment <ref type="bibr" target="#b15">[16]</ref>.</p><p>Yujie Wang, Xin Du, Zhihui Lu, Qiang Duan, and Jie Wu improved an LSTM model to detect equipment anomalies in rail transit systems, but their solution was implemented for only one problem and is not scalable to other fields <ref type="bibr" target="#b17">[18]</ref>. Mahe Zabin, Ho-Jin Choi, and Jia Uddin presented a hybrid DTL architecture comprising a deep convolutional neural network and long short-term memory layers for extracting both temporal and spatial features, enhanced by Hilbert-transform 2D images. They proposed a new customization of the model, but not an application that is ready and scalable enough to solve practical problems <ref type="bibr" target="#b20">[21]</ref>.</p><p>Preeti, Rajni Bala, and Ram Pal Singh considered the theoretical background of time series analysis and elaborated on the neural network with long short-term memory <ref type="bibr" target="#b2">[3]</ref>. Sheng Xiang, Yi Qin, Caichao Zhu, Yangyang Wang, and Haizhou Chen presented use cases of LSTM networks, including the example of predicting the remaining life of mechanical equipment <ref type="bibr" target="#b3">[4]</ref>. The same authors investigated LSTM networks at a deeper level for monitoring gear wear in mechanical equipment, and also covered the basic concepts of the network's algorithms <ref type="bibr" target="#b4">[5]</ref>.</p><p>Anuraganand Sharma reviewed existing neural network optimization methods; analyzing the advantages and disadvantages of various modifications of gradient descent, he surveyed one of its variants, stochastic gradient descent <ref type="bibr" target="#b5">[6]</ref>. Farajtabar M., Azizan N., Mott A., and Li A. considered variations of gradient descent and gave an initial account of orthogonal gradient descent for continual learning <ref type="bibr" target="#b6">[7]</ref>. Azizan, Navid &amp; Hassibi, Babak reviewed and investigated variational gradient descent as well as stochastic gradient descent <ref type="bibr" target="#b7">[8]</ref>. Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai investigated the problem of local minima and the possibility of using various methods to find the global minimum <ref type="bibr" target="#b8">[9]</ref>. Lee, J.D., Simchowitz, M., Jordan, M.I., and Recht, B. introduced the problem of saddle points in gradient descent and attempted to optimize ordinary gradient descent <ref type="bibr" target="#b9">[10]</ref>. Soukup D., Cejka T., and Hynek K. listed options for using machine learning methods to detect anomalies in various areas and gave an overview of anomaly detection in computer networks <ref type="bibr" target="#b10">[11]</ref>. Grcić, M., Bevandić, P., and Šegvić, S. provided an introduction to hybrid anomaly detection for dense open-set recognition and a review of existing methods for dataset anomaly detection <ref type="bibr" target="#b11">[12]</ref>.</p><p>The main problem evident from the reviewed articles is that each related work solves the task for one area only and is not scalable. It is therefore necessary to create a common, scalable solution that can be used in different areas and is ready to detect anomalies in almost any automated system.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Models, Methods &amp; Technology</head><head n="3.1.">Analysis of problems and features of the anomaly detection process</head><p>Anomaly detection (or outlier detection) is the identification of rare items, events, or observations that are suspicious because they differ significantly from the majority of the data. Typically, abnormal data relates to some problem or rare event, such as bank fraud, medical problems, structural defects, or malfunctioning equipment. This relationship makes it valuable to be able to decide which data points should be considered anomalies, as identifying such events is usually very interesting from a business perspective.</p><p>Any machine, whether rotating (pump, compressor, gas or steam turbine, etc.) or non-rotating (heat exchanger, distillation column, valve, etc.), will eventually reach a point of degradation. This point may not be an actual failure or shutdown, but the point at which the equipment no longer operates in an optimal state, meaning that some maintenance is required to restore its full operating potential. Simply put, determining the "health" of equipment is the realm of condition monitoring.</p><p>The most common way to perform condition monitoring is to look at each sensor measurement of the machine and set minimum and maximum limits for it. If the current value is within the limits, the machine is considered healthy; if it is out of range, the machine is considered faulty and an alarm is sent. But this procedure of hard-coded alarm limits is known to produce a large number of false alarms (alarms for situations that are actually healthy for the machine) as well as missed alarms (situations that are problematic but raise no alert). The first problem leads to wasted time and effort. The second problem is more serious, because it leads to real damage along with repair costs and lost productivity.</p><p>Both problems can result from the same cause: human error. Even with several operators watching all sensor values, a station cannot achieve efficient monitoring of its equipment, because a human may miss a value, and reviewing voluminous data takes a lot of time. That is why automated anomaly detection methods are the optimal choice: they avoid human error and focus the attention of service personnel on the anomalies already found by the application, instead of on reviewing the entire volume of data.</p><p>Imagine a set of stations: each of them will, with high probability, contain equipment with completely different parameters, units of measurement, numbers of sensors, and so on. Controlling all the equipment, keeping reports, and detecting abnormal values thus becomes more complicated, given that personnel must grow in proportion to the number of stations. As equipment multiplies, detecting incorrect behavior of particular automated systems becomes more and more difficult.</p><p>Usually, in this case, more personnel are hired, and more operators keep reports and try to spot incorrect operation of this or that automated system visually; or various applications are developed that are configured for specific automated systems only. It should therefore be noted that versatility plays a very important role in the further design and development of the software product, since the different kinds of equipment mentioned above carry completely different sets of sensors, from simple temperature sensors to complex sensors that measure pressure, voltage, speed, etc.</p></div>
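The fixed-threshold monitoring criticized above can be sketched in a few lines; the limit values and readings here are hypothetical examples for illustration, not values from the paper.

```python
# Minimal sketch of fixed min/max condition monitoring.
# Limits and readings are hypothetical, chosen only to illustrate the idea.
def check_reading(value, lo, hi):
    """Return True if the reading lies within the configured limits."""
    return lo <= value <= hi

readings = [1.2, 1.9, 2.4, 0.3, -2.6]
alarms = [v for v in readings if not check_reading(v, -2.0, 2.0)]
```

Note that such a check knows nothing about the sensor's behavior over time: a value like 2.4 raises an alarm even if the sensor's operating range has legitimately shifted, which is exactly the false-alarm problem described above.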
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Selection of tools for implementation of the program</head><p>Many different methods are used for data analysis, but the choice should be made deliberately: for each problem there is a more suitable solution, and the choice affects both the further operation of the software application and how its results affect the business itself.</p><p>An LSTM neural network is well suited for anomaly detection. LSTM (long short-term memory) is an artificial neural network architecture used in artificial intelligence and deep learning. Unlike standard feedforward neural networks, an LSTM has feedback connections. Such a recurrent neural network (RNN) can process not only individual data points (such as images) but entire data sequences (such as speech or video). Because this network can remember sequences of data, it can learn the behavior of a given sensor and then check whether subsequent data matches that behavior. This means the same network can be applied to many completely different signals that share some consistent behavior, which helps unify the application for use with completely different equipment.</p><p>LSTM is very good at detecting outliers in a signal, but before the data can be analyzed, it must be prepared. First, the data needs to be standardized; this is necessary for the correct operation of the neural network.</p><p>The standard score of a sample x is calculated by the formula z = (x − u) / s, where u is the mean of the training samples and s is their standard deviation. Centering and scaling are performed independently for each feature by computing the appropriate statistics on the samples in the training set. The mean and standard deviation are then stored and applied to new data via the same transformation.</p><p>Dataset standardization is a common requirement for many machine learning estimators: they can perform poorly if individual features do not roughly resemble standard normally distributed data (e.g., a Gaussian with zero mean and unit variance).</p><p>After standardization, the dataset should be divided into training and test samples. Usually, the training sample is 70% of the total and the test sample 30%. The training data is used to train the model; the test data checks the trained model.</p><p>In Figure <ref type="figure" target="#fig_7">1</ref>, we can see sensor data that cannot be analyzed for anomalies using fixed minimum and maximum limits.</p><p>Thanks to this neural network, we can work with various signals, including ones like those shown in Figure <ref type="figure" target="#fig_7">1</ref>, where it is impossible simply to set a minimum and maximum that the sensor value must not exceed, since there is a certain interval sequence that shrinks and grows. Examples of anomalies in such a sequence are shown in Figure <ref type="figure" target="#fig_8">2</ref>.</p></div>
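The standardization and 70/30 split described above might look as follows with scikit-learn; the synthetic input series is an assumption for illustration, and the scaler is fitted on the training part only, exactly as the text prescribes.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical sensor series; in the paper the data comes from a generator.
rng = np.random.default_rng(0)
series = rng.normal(loc=5.0, scale=2.0, size=1000).reshape(-1, 1)

# 70% of the points train the model, the remaining 30% test it.
# No shuffling: chronological order matters for time series.
split = int(len(series) * 0.7)
train, test = series[:split], series[split:]

# Fit the scaler on the training part only, then apply it to both parts,
# implementing z = (x - u) / s from the text.
scaler = StandardScaler().fit(train)
train_std = scaler.transform(train)
test_std = scaler.transform(test)
```

Fitting on the training set alone and reusing the stored mean and deviation on the test set avoids leaking test statistics into training.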
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Development of an anomaly analyzer using a neural network</head><p>Python should be used as the main programming language. For implementation, it is suggested to use the following modules:</p><p>• scikit-learn -a Python module for machine learning, built on SciPy.</p><p>• pandas -a software library written for the Python programming language for data manipulation and analysis; in particular, it offers data structures and operations for manipulating numerical tables and time series.</p><p>• NumPy -a module that adds support for large multidimensional arrays and matrices, along with a large library of high-level mathematical functions for operating on these arrays.</p><p>• Plotly -for data visualization.</p><p>• TensorFlow -for model realization.</p><p>Let's take the LSTM neural network as a basis: an artificial neural network whose advantage is that it avoids the exploding- and vanishing-gradient problems that affect ordinary backpropagation through time. In this way, we can analyze quite different data signals and prepare a fairly universal software application that will be useful in many areas.</p></div>
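As one possible realization with the modules listed above, an LSTM-based model in TensorFlow could be sketched as an autoencoder that learns to reconstruct normal sensor windows; windows with a large reconstruction error can then be flagged as anomalous. The layer sizes and window length below are assumptions for illustration, not the paper's exact architecture.

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 30, 1  # hypothetical window length and feature count

# Sketch of an LSTM autoencoder for reconstruction-based anomaly detection.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),                          # encode the window into a vector
    tf.keras.layers.RepeatVector(TIMESTEPS),           # repeat it for the decoder
    tf.keras.layers.LSTM(64, return_sequences=True),   # decode back to a sequence
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(FEATURES)),
])
model.compile(optimizer="adam", loss="mae")
```

Training such a model only on normal data (model.fit(train_windows, train_windows, ...)) makes large reconstruction errors on new windows a usable anomaly signal.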
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Neural network</head><p>To understand how the LSTM neural network works, let's first get acquainted with its main components; Figure <ref type="figure" target="#fig_1">3</ref> shows the basic structure of the LSTM. A sigmoid is a continuously differentiable, monotonic, non-linear S-shaped function that is often used to "smooth" the values of some quantity. It is defined by the formula:</p><formula xml:id="formula_0">S(x) = 1 / (1 + e^(−x)) (1)</formula><p>With the help of the sigmoid, any input number x is easily mapped to a value in the range from 0 to 1. The graph of the sigmoid is shown in Figure <ref type="figure" target="#fig_10">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: Sigmoid function</head><p>The notation "tanh" denotes the hyperbolic tangent function, which returns a result in the range from -1 to 1. The hyperbolic tangent is the hyperbolic counterpart of the circular tangent function used throughout trigonometry. Its graph is shown in Figure <ref type="figure" target="#fig_2">5</ref>.</p><p>It is defined by the formula:</p><formula xml:id="formula_1">f(x) = (e^x − e^(−x)) / (e^x + e^(−x))<label>(2)</label></formula><p>The conventional notations "+" and "×" denote the operations of addition and multiplication, respectively. Having analyzed the symbols in the general diagram, let's move on to a detailed review of the operation of the LSTM neural network.</p><p>The resulting ht and Ct are passed down the chain. Iterative gradient descent with error backpropagation could be used to minimize the total error, but the main problem with gradient descent for standard recurrent neural networks is that the error gradients decrease at an exponential rate as the time delay between important events increases, which was discovered in 1991 <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. With LSTM blocks, however, as the error magnitudes propagate backward from the output layer, the error remains locked in the block's memory. Thus, regular error backpropagation is effective for training an LSTM block to remember values over very long time intervals.</p></div>
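Formulas (1) and (2) can be checked directly with a few lines of Python; this is a simple numeric sketch for the reader, not part of the paper's application.

```python
import math

# Direct implementations of formulas (1) and (2) from the text.
def sigmoid(x):
    """S(x) = 1 / (1 + e^(-x)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """f(x) = (e^x - e^(-x)) / (e^x + e^(-x)); output lies in (-1, 1)."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
```

The sigmoid is symmetric around S(0) = 0.5, and the hand-written tanh agrees with the library function math.tanh, confirming the formulas.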
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Dataset creation</head><p>For a real check of the algorithm, it would be desirable to have real equipment returning actual values; but since equipment varies widely, it was decided to analyze Internet resources to find out what kinds of signals automated systems can return.</p><p>After analyzing Internet resources, it was decided to generate example data of three kinds: data within a fixed range, periodic data with repeating oscillations, and sinusoidal data. Data within a fixed range are shown in Figure <ref type="figure" target="#fig_16">10</ref>. An example of periodic data is shown in Figure <ref type="figure" target="#fig_17">11</ref>. An example of sinusoidal data is shown in Figure <ref type="figure" target="#fig_0">12</ref>.</p></div>
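The three kinds of generated data described above could be produced, for example, as follows; the amplitudes, periods, and noise levels are illustrative assumptions, not the paper's exact generator settings.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
t = np.arange(n)

# 1. Values confined to a fixed range.
ranged = rng.uniform(-2.0, 2.0, size=n)

# 2. Periodic data: a repeating two-level pattern with added noise
#    (a 50-point cycle: 40 points at one level, 10 at another).
cycle = np.concatenate([np.full(40, 3.0), np.full(10, -1.0)])
periodic = np.tile(cycle, n // 50) + rng.normal(0, 0.2, size=n)

# 3. Sinusoidal data with light measurement noise.
sinusoidal = np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.05, size=n)
```

Each array plays the role of one synthetic sensor signal; anomalies can later be injected into copies of these arrays for testing.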
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Preparation for data analysis</head><p>To work with artificial intelligence algorithms, the data must be prepared in an appropriate format. After generating the dataset and saving it to a CSV file, only the necessary rows should be selected and the data standardized.</p><p>Data standardization is the process of converting data into a common format so that analysts can process and analyze it. Most organizations use data from multiple sources, which may include on-premises storage, cloud storage, and various databases. Data from different sources can be problematic if it is not homogeneous, leading to difficulties later (for example, when the data is used to build dashboards, visualizations, etc.).</p><p>Data standardization is critical for many reasons. Above all, it helps establish clear, coherently defined elements and attributes, providing a complete catalog of the data. Whatever statistics we are trying to obtain or problems we are trying to solve, getting the data right is an important starting point.</p><p>This requires converting the data into a single format with logical and consistent definitions. These definitions form the metadata labels that identify different aspects of the data, which is the basis of the data standardization process.</p><p>In terms of accuracy, standardizing how data is labeled improves access to the most relevant pieces of information, which simplifies analytics and reporting.</p><p>The standard score is calculated according to the formula:</p><formula xml:id="formula_3">Z = (x − μ) / σ<label>(7)</label></formula><p>where x is a point from the dataset, μ is the arithmetic mean of the dataset, and σ is the standard deviation of the dataset.</p><p>The formula for the standard deviation can be written as:</p><formula xml:id="formula_4">σ = √( Σ(xᵢ − x̄)² / (n − 1) ) (8)</formula><p>where x̄ is the arithmetic mean of the dataset, n is the number of values in the dataset, and xᵢ is the i-th value from the dataset.</p><p>After standardization, the dataset should be divided into training and test samples. Usually, the training sample is 70% of the total and the test sample 30%. The training data is used to train the model; the test data checks the trained model.</p></div>
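Formulas (7) and (8) translate directly into NumPy; the sample data below is hypothetical.

```python
import numpy as np

def z_scores(data):
    """Standardize a dataset using formulas (7) and (8):
    Z = (x - mu) / sigma, with the sample standard deviation (n - 1)."""
    x = np.asarray(data, dtype=float)
    mu = x.mean()                 # arithmetic mean of the dataset
    sigma = x.std(ddof=1)         # sqrt(sum((x_i - mean)^2) / (n - 1))
    return (x - mu) / sigma

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
z = z_scores(data)
```

After this transformation the data has zero mean and unit sample standard deviation, which is exactly the property the neural network needs.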
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.7.">Interface</head><p>After successful registration, the user can get acquainted with the main functionality of the software application. On the stations page, a user with manager rights can create stations and change or delete any of them. The station page is shown in Figure <ref type="figure" target="#fig_18">13</ref>. Before creating station equipment and equipment sensors, the user must create equipment types, which represent a particular automated system of the station. Since equipment at different stations can share similar properties, these common features should be captured in the equipment type. The same logic applies to sensor types: temperature sensors, for example, can be installed on different equipment, and it makes no sense to recreate the same information for every kind of equipment. The equipment types page is shown in Figure <ref type="figure" target="#fig_10">14</ref>; the sensor types page is similar.</p><p>After creating equipment types and sensor types, the user can start creating equipment and sensors from the pre-prepared types, selecting the appropriate type from the drop-down menu shown in Figure <ref type="figure" target="#fig_11">15</ref>. Creating sensors works similarly, except for the choice of signal type and the presence or absence of anomalies, since the dataset is generated rather than taken from a real automated system, to which there is no access at the development stage.</p><p>After that, the user can view the equipment list with all the necessary details and edit or delete items; the equipment page is shown in Figure <ref type="figure" target="#fig_12">16</ref>. Within each piece of equipment, the user can create sensors. Among the signal types, the user can choose periodic (with a certain period of repetition of data ranges), normal (where values stay within a certain range), and sine wave. The type is also chosen in a drop-down menu, shown in Figure <ref type="figure" target="#fig_13">17</ref>. The sensors page is similar to the equipment page.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>For the experiments, Internet resources were studied to check what data different equipment can return from different sensors. Three signals were chosen: a regular signal that should return data in the range from -2 to 2, shown in Figure <ref type="figure" target="#fig_14">18</ref>; a repeating signal with a recurring trend whose larger part takes values between -4 and 3 and whose smaller part lies between -1 and 1, shown in Figure <ref type="figure" target="#fig_7">19</ref>; and a sinusoidal signal, shown in Figure <ref type="figure" target="#fig_8">20</ref>.</p><p>Anomalies were then imitated, as shown in Figure <ref type="figure" target="#fig_8">21</ref>. For application training, the data was split into training and test batches. After training and testing, the application analyzes the data, finds anomalies, and shows the results; anomalies are highlighted as orange dots, as shown in Figure <ref type="figure" target="#fig_8">21</ref>.</p><p>The application is ready to analyze and return results for radically different signals, which addresses the shortcoming of the previously reviewed studies that were prepared only for specific equipment. This application works with completely different equipment, which makes it flexible for further use at various stations, from pumps and turbines to nuclear reactors.</p><p>If the system did not detect any anomalies, the user is informed by a message stating that there are no anomalies in the given equipment across all sensors, as shown in Figure <ref type="figure" target="#fig_8">22</ref>.</p></div>
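One common way to turn a trained model's reconstruction errors into the anomaly flags described above is to derive a threshold from the errors observed on the training data. The mean-plus-three-sigma rule and the error values below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def flag_anomalies(errors, train_errors):
    """Mark points whose reconstruction error exceeds a threshold
    derived from the training errors (mean + 3 * std, an assumed rule)."""
    threshold = train_errors.mean() + 3 * train_errors.std()
    return errors > threshold

# Hypothetical errors: training errors are small; two test points stand out.
train_err = np.abs(np.random.default_rng(1).normal(0, 0.1, size=500))
test_err = np.array([0.05, 0.08, 1.2, 0.07, 0.9])
flags = flag_anomalies(test_err, train_err)
```

Points flagged True would be the ones drawn as orange dots on the result graphs; the rest of the signal is left untouched.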
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 12: Equipment without anomalies</head><p>If anomalies were detected, the status changes to "Exist". To study in detail where exactly an anomaly occurred, the user clicks on the equipment of interest and, on the page listing all sensors of this equipment, checks the sensors for the "Exist" status, as shown in Figure <ref type="figure" target="#fig_9">23</ref>. The user can then go to the sensor to be investigated and analyze the results of anomaly detection in this automated system. A graph of such results is shown in Figure <ref type="figure" target="#fig_10">24</ref>.</p><p>Detected anomalies are indicated by orange dots, as shown in Figure <ref type="figure" target="#fig_10">24</ref>. If there are no anomalies, the graph consists only of the readings of the current sensor of this equipment, as shown in Figure <ref type="figure" target="#fig_11">25</ref>. In this way, the program can analyze quite different signals; an example is shown in Figure <ref type="figure" target="#fig_8">21</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>In this work, a user-friendly software application was implemented that is convenient for keeping records of various automated station systems, together with a mechanism for automatic detection of anomalies in them. The technical specification was prepared, and appropriate tools for implementing the program were chosen. Related works were analyzed, with their advantages and disadvantages. A scalable application that can be used with different equipment was implemented, which is the main novelty. The application was tested in several experiments, which showed acceptable results for future use.</p><p>The main role in the detection of anomalies is played by the LSTM neural network. This work describes the process of preparing data for its use, and of creating and training the network. To train the neural network, a proprietary dataset generator was created after analyzing various Internet resources. The process of choosing a specific neural network architecture is described, and a number of algorithms for preliminary data preparation were also implemented.</p><p>Prospects for further research include an in-depth investigation of the performance of an LSTM neural network in this context. The current application should be checked in various areas, and a study should be conducted on the success of this software application using a large amount of real data from equipment indicators of automated systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :Figure 2 :</head><label>12</label><figDesc>Figure 1: Data before standardisation</figDesc><graphic coords="5,162.22,86.93,306.00,138.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Structure of LSTM</figDesc><graphic coords="5,94.88,550.51,390.04,146.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Hyperbolic tangent function</figDesc><graphic coords="6,171.35,434.12,248.15,137.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6 : 4 )</head><label>64</label><figDesc>Figure 6: First step of LSTM</figDesc><graphic coords="7,172.10,73.50,246.15,155.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Second step of LSTM</figDesc><graphic coords="7,197.90,363.42,227.23,145.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Third step of LSTM</figDesc><graphic coords="7,173.60,590.56,243.38,157.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Fourth step of LSTM</figDesc><graphic coords="8,206.15,150.54,211.90,138.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Dataset example</figDesc><graphic coords="8,153.85,579.40,301.05,171.03" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: An example of periodic dataset</figDesc><graphic coords="9,140.60,269.93,311.47,163.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Sinusoidal dataset</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Stations page</figDesc><graphic coords="10,96.85,285.48,415.85,348.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Equipment type page</figDesc><graphic coords="11,176.35,415.43,256.10,222.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Equipment type context menu</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Equipment page</figDesc><graphic coords="12,144.10,73.50,321.35,320.49" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_14"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Selection of graph types</figDesc><graphic coords="12,181.60,486.82,246.50,242.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_15"><head>Figure 9 :Figure 19 :</head><label>919</label><figDesc>Figure 9: Regular graph</figDesc><graphic coords="13,104.35,218.43,400.19,227.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_16"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Sinusoid graph</figDesc><graphic coords="14,160.10,73.50,273.15,143.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_17"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Experiments results</figDesc><graphic coords="14,108.10,245.08,393.60,428.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_18"><head>Figure 13 :</head><label>13</label><figDesc>Figure 13: Page of the sensors of this equipment with anomalies present</figDesc><graphic coords="15,133.38,459.92,336.90,293.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_19"><head>Figure 14 :Figure 15 :</head><label>1415</label><figDesc>Figure 14: The page of the sensor of this equipment with existing anomalies</figDesc><graphic coords="16,116.00,163.19,363.45,289.19" type="bitmap" /></figure>
		</body>
		<back>
		</back>
	</text>
</TEI>
