<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Deep Learning Methods Application in Finance: A Review of State of Art</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dovilė</forename><surname>Kuizinienė</surname></persName>
							<email>dovile.kuiziniene@vdu.lt</email>
							<affiliation key="aff0">
<orgName type="department">Department of Applied Informatics</orgName>
								<orgName type="institution">Vytautas Magnus University</orgName>
								<address>
									<settlement>Kaunas</settlement>
									<country key="LT">Lithuania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tomas</forename><surname>Krilavičius</surname></persName>
							<email>tomas.krilavicius@vdu.lt</email>
							<affiliation key="aff0">
<orgName type="department">Department of Applied Informatics</orgName>
								<orgName type="institution">Vytautas Magnus University</orgName>
								<address>
									<settlement>Kaunas</settlement>
									<country key="LT">Lithuania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="institution">KTU</orgName>
								<address>
									<addrLine>Santaka Valley</addrLine>
									<settlement>Kaunas</settlement>
									<country key="LT">Lithuania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Deep Learning Methods Application in Finance: A Review of State of Art</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">741580EB6DDDEF68D2A999A48CD417E6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T11:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial intelligence</term>
					<term>Machine Learning</term>
					<term>Deep Learning</term>
					<term>Convolution Neural Network</term>
					<term>Deep Belief Network</term>
					<term>Deep Boltzmann Machine</term>
					<term>Deep neural network</term>
					<term>Deep Q-Learning</term>
					<term>Deep reinforcement learning</term>
					<term>The extreme learning machine</term>
					<term>Generative adversarial network</term>
					<term>Recurrent Neural Network</term>
					<term>Long short-term memory</term>
					<term>Gated Recurrent Unit</term>
					<term>Finance</term>
					<term>Financial innovations</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The use of artificial intelligence in financial markets and business units creates financial innovations. These innovations are a key indicator of economic growth and of the formation of an intelligent finance system. In recent years, scientists and the most innovation-driven companies, such as Google, IBM and Microsoft, have been focusing on deep learning methods. These methods have achieved significant performance in diverse areas: image recognition, natural language processing, speech recognition, video processing, etc. It is therefore necessary to understand the variety of deep learning methods before considering their applicability in the financial field. Accordingly, this paper first presents the differences between the deep learning architectures already settled in the scientific community. Secondly, it shows a big picture of the developing scientific articles on the use of deep learning in the finance field, identifying the most frequently used deep learning methods. Finally, the conclusions, limitations and future work are presented.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The global financial industry is quietly changing under the catalysis of artificial intelligence (AI) <ref type="bibr" target="#b0">[1]</ref>. AI represents a clear opportunity to advance the transformation of the finance industry by providing users with greater value and increasing firms' revenues <ref type="bibr" target="#b1">[2]</ref>. The goal of AI is to invent a machine which can sense, remember, learn, and recognize like a real human being <ref type="bibr" target="#b2">[3]</ref>. The deep integration of AI technology and finance is the inevitable result of deepening development and exploring innovation in these fields <ref type="bibr" target="#b0">[1]</ref>. These innovations have the potential to directly influence both the production and the characteristics of a wide range of products and services, with important implications for productivity, employment, and competition <ref type="bibr" target="#b3">[4]</ref>. AI also improves work efficiency in business and creates a whole process of intelligent finance <ref type="bibr" target="#b0">[1]</ref>. Applications of AI systems are generally viewed as positive for economic growth and productivity <ref type="bibr" target="#b1">[2]</ref>. Deep learning is a recently developed field belonging to artificial intelligence <ref type="bibr" target="#b2">[3]</ref>. It attempts to learn hierarchical representations from raw data and is capable of learning simple concepts first and then successfully building up more complex concepts by merging the simpler ones <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. Companies such as Google, Facebook, IBM, Microsoft and others use these algorithms for developing next-generation intelligent applications <ref type="bibr" target="#b7">[8]</ref>. 
In finance there are two major problems: 1) predicting future returns (e.g., stock prices, currencies, indices, product demand); or 2) making a categorical classification (e.g., credit scoring ("good", "bad"), bankruptcy ("True", "False")). While the issues in finance have remained almost the same over the last several decades, novel methods and a growing amount of data are changing the field, especially Machine Learning and Artificial Intelligence techniques <ref type="bibr" target="#b8">[9]</ref>. Furthermore, the exploitation of additional data sources allows better results to be achieved, e.g. satellite images can be used for predicting economic activity, voice data provides information about emotions, and textual information extracted from news and comments gives the sentiments of writers and audiences, etc. <ref type="bibr" target="#b9">[10]</ref>. However, extracting useful knowledge out of such a data heap is not trivial; it requires considerable effort <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>. Portfolio management tasks pose additional challenges, because there are two main issues in portfolio formation: (1) selection of the assets with the highest revenue, and (2) determination of the value composition of the assets in the portfolio, to achieve the goal of maximal potential return with minimal risk <ref type="bibr" target="#b12">[13]</ref>. Therefore, this paper is divided into two parts: 1) different deep learning architectures are discussed; 2) the application of the aforementioned methods in finance is discussed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature review</head><p>The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving" <ref type="bibr" target="#b13">[14]</ref>. In other words, it tries to mimic the human brain, which is capable of processing complex input data, learning different kinds of knowledge quickly, and solving many kinds of complicated tasks well <ref type="bibr" target="#b2">[3]</ref>.</p><p>AI has long been part of human thought and has been slowly evolving in academic research labs <ref type="bibr" target="#b13">[14]</ref>. Machine learning is a subset of AI. Machine learning is the study of computer algorithms that can be improved automatically through experience <ref type="bibr" target="#b0">[1]</ref>. Machine learning algorithms avoid following strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs <ref type="bibr" target="#b13">[14]</ref>. In machine learning, artificial neural networks are a family of models that mimic the structural elegance of the neural system and learn patterns inherent in observations <ref type="bibr" target="#b14">[15]</ref>, see Fig. <ref type="figure" target="#fig_1">1</ref>. The term "deep" refers to the number of layers in the network: the more layers, the deeper the network <ref type="bibr" target="#b15">[16]</ref>. Traditional neural networks contain only 2 or 3 layers, while deep networks can have hundreds <ref type="bibr" target="#b15">[16]</ref>. Deep learning is developing explosively today. 
Compared with shallow learning, deep learning reaches the state of the art in many research areas <ref type="bibr" target="#b16">[17]</ref>.</p><p>In contrast to shallow architectures like kernel machines, which contain only a fixed feature layer (or base function) and a weight-combination layer (usually linear), a deep architecture refers to a multi-layer network where each pair of adjacent layers is connected in some way <ref type="bibr" target="#b2">[3]</ref>. This introduces unprecedented flexibility to model even highly complex, non-linear relationships between predictor and outcome variables, a quality that has allowed deep neural networks to outperform traditional machine learning models in a variety of tasks <ref type="bibr" target="#b17">[18]</ref>. Deep learning methods have only now become so powerful for technical reasons: computational power (hardware), the availability of large datasets and better optimization algorithms <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b18">[19]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Convolution Neural Network</head><p>The convolution neural network (CNN) algorithm is separated into two main parts: feature detection and classification (see Fig. <ref type="figure" target="#fig_2">2</ref>).</p><p>The feature detection phase consists of convolution, pooling and rectified linear unit (ReLU) layers. Convolutional filters activate certain features from a data set unit (image, video, time series). This layer produces a huge number of features, which causes overfitting problems and expensive computation <ref type="bibr" target="#b7">[8]</ref>. Pooling layers reduce this problem by aggregating multiple feature values into a single value. Max-pooling is the most commonly used pooling operation; in Keras, the Average-pooling, Global-max-pooling or Global-average-pooling operations can be used instead <ref type="bibr" target="#b19">[20]</ref>. The rectified linear unit (ReLU) is an activation function meant to zero out negative values, whereas a sigmoid "squashes" arbitrary values into the interval [0, 1], producing something that can be interpreted as a probability <ref type="bibr" target="#b18">[19]</ref>.</p><p>These three operations are repeated over tens or hundreds of layers, with each layer learning to detect different features <ref type="bibr" target="#b15">[16]</ref>. The classification phase consists of two layers: dropout and fully connected. Dropout consists of randomly dropping out (setting to zero) a number of output features of the layer during training <ref type="bibr" target="#b18">[19]</ref>.</p><p>The fully connected layer produces a vector of K dimensions, where K is the number of classes that the network is able to predict. This vector contains the probabilities for each class of the image being classified <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b20">21]</ref>. 
The quality of the model is evaluated by the cost function in the fully connected layer (sigmoid, softmax or other).</p></div>
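As a rough illustration of the feature detection phase described above, the sketch below chains a 1D convolution, ReLU and max-pooling in plain numpy. The filter and input series are toy values chosen for illustration, not taken from any reviewed paper.

```python
import numpy as np

def conv1d(x, kernel):
    # "valid" convolution (cross-correlation, as used in CNN layers)
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    # zeroes out negative feature values
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # non-overlapping max pooling; a trailing remainder is truncated
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([1., -2., 3., -4., 5., -6., 7., -8.])  # toy time series
k = np.array([1., 0., -1.])                          # simple difference filter
features = max_pool(relu(conv1d(x, k)))              # -> array([2., 2., 2.])
```

Pooling halves the feature count here, which is exactly the parameter/overfitting reduction the text describes.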
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Deep Belief Network</head><p>The power of the Deep Belief Network (DBN) (Fig. <ref type="figure" target="#fig_3">3</ref> and Fig. <ref type="figure" target="#fig_4">4</ref>) lies in its ability to reconstruct both the input vector and the learned feature vectors, which is implemented using a layer-by-layer learning strategy <ref type="bibr" target="#b21">[22]</ref>. Each layer of a DBN consists of a Restricted Boltzmann Machine (RBM). RBMs follow the principle of probability distributions to complete their learning cycle <ref type="bibr" target="#b22">[23]</ref>. Each RBM is composed of a visible layer (v) and a hidden layer (h), with a set number of neurons in each layer. The neurons between different layers are fully connected, and the neurons in the same layer are not connected <ref type="bibr" target="#b22">[23]</ref>. When an RBM has been learned, its feature activations are used as the "data" for training the next RBM in the DBN <ref type="bibr" target="#b23">[24]</ref>. The RBM is an unsupervised network which considers the visible-to-hidden connection as a subnetwork. This hidden layer is then considered as the visible layer for the next layer, and so on <ref type="bibr" target="#b23">[24]</ref>. The higher-level features are learned from the previous layers, and these higher-level features are believed to be more complicated and to better reflect the information contained in the structure of the input data <ref type="bibr" target="#b2">[3]</ref>. DBN training is divided into two steps: a forward pre-training process and a reverse fine-tuning process <ref type="bibr" target="#b24">[25]</ref>. During the pre-training phase, the RBMs are trained one by one up to the hidden layer of the last RBM; during this phase, the parameters of each RBM are obtained <ref type="bibr" target="#b22">[23]</ref>. A back-propagation network (BP) is set in the last hidden layer of the DBN <ref type="bibr" target="#b24">[25]</ref>. 
BP is applied to fine-tune the parameters using the output labels of the sample data <ref type="bibr" target="#b22">[23]</ref>.</p></div>
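The greedy layer-wise pass described above can be sketched as follows. This is only the forward "activations become data for the next RBM" idea with random untrained weights, not a full contrastive-divergence trainer; the sizes are arbitrary toy values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_hidden_probs(v, W, b):
    # p(h=1 | v) for a binary RBM: layers fully connected to each other,
    # no connections within a layer
    return sigmoid(v @ W + b)

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=(5, 6)).astype(float)    # toy visible data
W1, b1 = rng.normal(size=(6, 4)) * 0.1, np.zeros(4)  # RBM 1 parameters
W2, b2 = rng.normal(size=(4, 3)) * 0.1, np.zeros(3)  # RBM 2 parameters

# greedy layer-wise pass: each RBM's hidden activations serve as
# the "data" for the next RBM in the stack
h1 = rbm_hidden_probs(v, W1, b1)
h2 = rbm_hidden_probs(h1, W2, b2)
```

In a real DBN the weights would be learned one RBM at a time before the back-propagation fine-tuning step.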
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Deep Boltzmann Machine</head><p>The Deep Boltzmann Machine (DBM) has only one undirected network <ref type="bibr" target="#b23">[24]</ref>. The DBM, like the DBN, is composed of Restricted Boltzmann Machines (RBM). The main difference is related to the interaction among the layers of RBMs <ref type="bibr" target="#b24">[25]</ref>. For the computation of the conditional probability of the hidden units h1, both the lower visible layer v and the upper hidden layer h2 are incorporated, which makes the DBM different from the DBN and also more robust to noisy observations <ref type="bibr" target="#b14">[15]</ref>. There are no direct connections between the units in the same layer. The DBM parameters of all layers can be optimized jointly by following the approximate gradient of a variational lower bound on the likelihood objective <ref type="bibr" target="#b25">[26]</ref>.</p><p>Unlike the DBN, the DBM can incorporate top-down feedback, which better propagates uncertainty and hence deals with ambiguous inputs more robustly <ref type="bibr" target="#b26">[27]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Deep Neural Network</head><p>Due to the novelty of the concept, the term Deep neural network (DNN) (Fig. <ref type="figure">5</ref>) can be found in the scientific literature for all the algorithms analyzed in this paper. However, in recent years the concept of the DNN has become identified with an Artificial Neural Network (ANN) with hidden layers <ref type="bibr" target="#b8">[9]</ref>  <ref type="bibr" target="#b27">[28]</ref>. A DNN is typically a feedforward network, so it can be understood as a Multilayer Perceptron (MLP). An MLP consists of an input layer, several hidden layers and one output layer, and it is widely used for pattern classification, recognition and prediction <ref type="bibr" target="#b28">[29]</ref>.</p></div>
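A minimal forward pass through such an MLP can be sketched in numpy; the layer sizes and random weights here are illustrative assumptions, not from any reviewed model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def mlp_forward(x, layers):
    # feedforward pass: input -> hidden layers (ReLU) -> output layer
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b   # linear output; a softmax would follow for classification

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),   # input -> hidden 1
          (rng.normal(size=(8, 8)), np.zeros(8)),   # hidden 1 -> hidden 2
          (rng.normal(size=(8, 3)), np.zeros(3))]   # hidden 2 -> output
out = mlp_forward(rng.normal(size=(2, 4)), layers)  # 2 samples, 3 outputs
```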
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Deep Q-Learning or Deep Reinforcement Learning</head><p>The concepts of Deep Q-Learning (DQL) and Deep Reinforcement Learning (DRL) are used interchangeably in the scientific literature <ref type="bibr" target="#b5">(6)</ref>. DQL always uses a reinforcement learning algorithm, and DRL often uses the Q-learning function, because it deals with high-dimensional state space inputs <ref type="bibr" target="#b29">[30]</ref>, <ref type="bibr" target="#b30">[31]</ref>. A reinforcement learning (RL) process involves an agent learning from interactions with its environment in discrete time steps in order to update its mapping between the perceived state and a probability of selecting possible actions (the policy) <ref type="bibr" target="#b31">[32]</ref>.</p><p>In other words, RL is commonly used to solve a sequential decision-making problem <ref type="bibr" target="#b29">[30]</ref>. The RL problem is normally formalized using the Markov decision process (MDP) and includes a set of states S, a set of actions A, a transition function T over action distributions, a reward function R and a discount factor 𝛾 <ref type="bibr" target="#b32">[33]</ref>. The solution to the MDP is a policy 𝜋 : S → A, and the policy should maximize the expected discounted cumulative reward <ref type="bibr" target="#b29">[30]</ref>. Q-learning, as a typical reinforcement learning approach, mimics human behavior in taking actions in the environment in order to obtain the maximum long-term reward <ref type="bibr" target="#b33">[34]</ref>. The DQL process can be viewed as iteratively optimizing the network parameters according to the gradient direction of the loss function at each stage <ref type="bibr" target="#b34">[35]</ref>. 
Therefore, an inexact approximate gradient estimate with a large variance can greatly deteriorate the representation performance of the deep Q network by driving the network parameters away from the optimal setting, causing large variability in DQL performance <ref type="bibr" target="#b34">[35]</ref>. The advantages of deep Q-learning are good results and ease of use (the code can easily be modified for different physical problems) <ref type="bibr" target="#b35">[36]</ref>.</p></div>
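The Q-learning update underlying DQL can be shown in tabular form (deep Q networks replace the table with a neural network). The 2-state MDP, learning rate and rewards below are invented purely for illustration.

```python
import numpy as np

# toy 2-state, 2-action MDP (assumed for illustration)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9   # learning rate and discount factor

def q_update(Q, s, a, r, s_next):
    # move Q(s, a) toward the Bellman target: r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

# one observed transition: action 1 in state 0 gives reward 1, lands in state 1
q_update(Q, 0, 1, 1.0, 1)
```

Repeating this update over many interactions makes Q converge to the expected discounted cumulative reward of each state-action pair.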
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6.">The Extreme Learning Machine</head><p>The extreme learning machine (ELM) is a single-hidden-layer feedforward network, proposed by Huang in 2012.</p><p>In a traditional feed-forward ANN the training of the network is iterative, while in the ELM the process is transformed into an analytical equation <ref type="bibr" target="#b36">[37]</ref>.</p><p>In the ELM, the weights between the input and hidden layers are assigned randomly following a normal distribution, and the weights between the hidden and output layers are learnt in a single step by a pseudo-inverse technique.</p><p>During training, the hidden layer is not learned; instead, the weight matrix of the output layer is obtained by solving an optimization problem formulated with some learning criteria and regularizations <ref type="bibr" target="#b37">[38]</ref>; in theory, the output weights are solved from a regularized least-squares problem <ref type="bibr" target="#b38">[39]</ref>. Therefore, the ELM offers benefits such as fast learning speed, ease of implementation, and less human intervention when compared to standard neural networks <ref type="bibr" target="#b39">[40]</ref>.</p></div>
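The single-step pseudo-inverse solution described above is short enough to show directly; the toy regression data below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1      # toy regression target

# input-to-hidden weights are drawn randomly and never trained
W = rng.normal(size=(3, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W + b)                        # hidden-layer output matrix

# output weights in one analytical step (Moore-Penrose pseudo-inverse,
# i.e. the least-squares solution)
beta = np.linalg.pinv(H) @ y
pred = H @ beta
```

There is no iterative backpropagation loop here at all, which is the source of the ELM's fast learning speed.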
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7.">Generative Adversarial Network</head><p>The general idea of the Generative Adversarial Network (GAN) is to train a generator to reconstruct high-resolution images that fool a discriminator trained to distinguish generated images from real ones <ref type="bibr" target="#b40">[41]</ref> (Fig. <ref type="figure">8</ref>). This idea involves two competing neural network models: one of them takes noise as input and produces samples (the generator), while the other model (the discriminator) accepts both the data output by the generator and the real data and separates their sources <ref type="bibr" target="#b41">[42]</ref>. The discriminator trains itself to discriminate real data from generated data better, while the generator trains itself to fit the real data distribution so as to fool the discriminator <ref type="bibr" target="#b42">[43]</ref>. These two neural networks are trained at the same time.</p></div>
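The two competing objectives can be written down concretely. The sketch below computes only the standard GAN losses from raw discriminator scores (logits); the networks themselves and any training loop are omitted, and the score values are made up.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(real_scores, fake_scores):
    # the discriminator maximizes log D(x) + log(1 - D(G(z)))
    return -(np.log(sigmoid(real_scores)).mean()
             + np.log(1 - sigmoid(fake_scores)).mean())

def g_loss(fake_scores):
    # the generator tries to fool D; the common "non-saturating" form
    # maximizes log D(G(z))
    return -np.log(sigmoid(fake_scores)).mean()

# a confident discriminator (real high, fake low) has near-zero loss
low_d = d_loss(np.array([10.0]), np.array([-10.0]))
# an undecided discriminator (score 0 -> D = 0.5) costs the generator log 2
g_at_half = g_loss(np.array([0.0]))
```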
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.8.">Recurrent Neural Network</head><p>The Recurrent Neural Network (RNN) (Fig. <ref type="figure">9</ref>) differs from traditional feedforward neural networks because it has feedback connections, which can be between hidden units or from the output to the hidden units <ref type="bibr" target="#b43">[44,</ref><ref type="bibr" target="#b44">45]</ref>. These connections address the temporal relationship of the inputs by maintaining internal states that have memory. An RNN is able to process sequential inputs by having a recurrent hidden state whose activation at each step depends on that of the previous step <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b45">46]</ref>. In other words, the RNN not only processes the current element in the sequence, but also draws upon the hidden layer of the previous element in the sequence <ref type="bibr" target="#b17">[18]</ref>. For example, the states produced by an RNN at time t-1 will have some impact on the states produced by the RNN at time t <ref type="bibr" target="#b16">[17]</ref>. The hidden units can be regarded as the storage of the whole network, which remembers the end-to-end information <ref type="bibr" target="#b46">[47]</ref>. However, it has been observed that it is difficult to train RNNs to deal with long-term sequential data, as the gradients tend to vanish <ref type="bibr" target="#b4">[5]</ref>.</p></div>
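The recurrence itself is one line: the hidden state produced at step t-1 is fed back in at step t. A minimal sketch with random, untrained weights (the dimensions and sequence are illustrative):

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, b):
    # h feeds back: the activation at step t depends on that at step t-1
    return np.tanh(x @ Wx + h @ Wh + b)

rng = np.random.default_rng(2)
Wx = rng.normal(size=(3, 5))   # input -> hidden weights
Wh = rng.normal(size=(5, 5))   # hidden -> hidden (feedback) weights
b = np.zeros(5)

h = np.zeros(5)                          # initial hidden state
for x_t in rng.normal(size=(4, 3)):      # a toy sequence of 4 inputs
    h = rnn_step(x_t, h, Wx, Wh, b)      # h carries memory forward
```

Repeated multiplication through `Wh` during backpropagation is exactly where the vanishing-gradient problem mentioned above arises.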
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.9.">Long short-term memory</head><p>Long short-term memory (LSTM) (Fig. <ref type="figure" target="#fig_1">10</ref>) is called in the literature a class <ref type="bibr" target="#b12">[13]</ref>, an advanced form or an extension <ref type="bibr" target="#b47">[48]</ref> of the RNN. The main advantage of LSTM compared with the RNN is its capability to learn longer dependencies in the data <ref type="bibr" target="#b48">[49]</ref>. Information is processed sequentially in LSTM, but there is a memory cell which remembers and forgets information <ref type="bibr" target="#b47">[48]</ref>. Each memory cell contains three multiplicative units: an input gate, an output gate and a forget gate, which control the flow of information <ref type="bibr" target="#b49">[50]</ref>. The input gate determines how much current information should be treated as input in order to generate the current state <ref type="bibr" target="#b50">[51]</ref>, whilst the forget gate determines which information is to be forgotten from the memory state <ref type="bibr" target="#b51">[52]</ref>. Finally, the output gate filters the information that can actually be treated as significant and produces the output <ref type="bibr" target="#b51">[52]</ref>. The "gate" structure is implemented using the sigmoid function, which denotes how much information is allowed to pass. For one hidden layer in LSTM, the activation function is used in forward propagation, and the gradient is used in backward propagation <ref type="bibr" target="#b37">[38]</ref>.</p></div>
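A single LSTM cell step with the three sigmoid gates can be sketched as follows (bias terms are omitted and all weights are random toy values, so this illustrates the data flow only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, p):
    # the three multiplicative sigmoid gates control the memory cell c
    z = np.concatenate([x, h])
    i = sigmoid(p["Wi"] @ z)       # input gate: how much new info enters
    f = sigmoid(p["Wf"] @ z)       # forget gate: what to drop from memory
    o = sigmoid(p["Wo"] @ z)       # output gate: what to emit
    g = np.tanh(p["Wg"] @ z)       # candidate cell state
    c_new = f * c + i * g          # forget old content, admit new content
    h_new = o * np.tanh(c_new)     # filtered output
    return h_new, c_new

rng = np.random.default_rng(3)
dim_x, dim_h = 2, 4
p = {k: rng.normal(size=(dim_h, dim_x + dim_h)) for k in ("Wi", "Wf", "Wo", "Wg")}
h, c = np.zeros(dim_h), np.zeros(dim_h)
h, c = lstm_step(np.array([1.0, -1.0]), h, c, p)
```

Because `c_new` is updated additively through the forget gate rather than squashed at every step, gradients survive over longer sequences than in a plain RNN.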
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.10.">Gated recurrent unit</head><p>The Gated Recurrent Unit (GRU) aims to solve the vanishing gradient problem which comes with a standard RNN <ref type="bibr" target="#b52">[53]</ref>. The GRU consists of two gates: an update gate (zt) and a reset gate (rt). The update gate decides how much the unit updates its activation, or content, and the reset gate allows the unit to forget the previously computed state <ref type="bibr" target="#b53">[54]</ref>. The GRU is less complex than the LSTM; it does not possess any internal memory or output gate like the LSTM <ref type="bibr" target="#b48">[49]</ref>.</p></div>
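The two-gate structure can be sketched the same way as the LSTM cell above, again with bias terms omitted and random illustrative weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    z_in = np.concatenate([x, h])
    z = sigmoid(p["Wz"] @ z_in)    # update gate z_t: how much to update
    r = sigmoid(p["Wr"] @ z_in)    # reset gate r_t: how much past to forget
    # candidate activation uses the reset-scaled previous state
    h_cand = np.tanh(p["Wh"] @ np.concatenate([x, r * h]))
    return (1 - z) * h + z * h_cand   # interpolate between old and new state

rng = np.random.default_rng(4)
dim_x, dim_h = 2, 4
p = {k: rng.normal(size=(dim_h, dim_x + dim_h)) for k in ("Wz", "Wr", "Wh")}
h = gru_step(np.array([0.5, -0.5]), np.zeros(dim_h), p)
```

Note there is no separate cell state `c` and no output gate, which is the reduced complexity the text refers to.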
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Application of Deep Learning Methods</head><p>Articles were included from electronic libraries: Science Direct, IEEE, Scopus, ACM, Emerald, Springer-Link, JSTOR, EBSCO and others. The analyzed period ran from 2017 to 2020, and the review was conducted in January 2020. The keywords "Deep learning" and "Finance" were used for article selection. All methods presented in this review match the term "Deep learning", so an individual search for each method was not performed. The same holds for the term "Finance", which includes accounting, financial markets, risks, etc. Therefore, this paper presents a big picture of the developing scientific articles in the Deep Learning in Finance category. 33 papers were selected and analyzed. The analyzed articles can be categorized by the problem of the given task: to predict future returns or to make a classification of results. Sometimes, natural language processing algorithms are used for better results (Fig. <ref type="figure" target="#fig_9">12</ref> and Tab 1).</p><p>The classification algorithms in finance have most often been applied to credit scoring, which divides loans into "good" and "bad". To solve this problem, authors used DBN <ref type="bibr" target="#b28">[29]</ref>, modified LSTM <ref type="bibr" target="#b51">[52]</ref> and CNN <ref type="bibr" target="#b54">[55]</ref> networks. The results cannot be compared due to the different classifier evaluation methods used and the differences in data sources. In the credit scoring topic there is a big problem with unbalanced data sets, e.g. the authors of <ref type="bibr" target="#b54">[55]</ref> used a data set in which 91.55% of the instances were creditworthy, and the CNN accuracy rate was 91.64%. A CNN network was used for the bankruptcy and investment market structure problems, and a DQL network for tax evasion. Articles in the financial field are interested in obtaining knowledge from words and using it as indicators. 
Therefore, a trend towards using natural language processing techniques can be seen. The goal of natural language processing (NLP) is to process text using computational linguistics, text analysis, machine learning, and statistical and linguistic knowledge in order to analyze and extract significant information <ref type="bibr" target="#b55">[56]</ref>. Researchers in the financial field are using sentiment analysis for better stock price prediction or bankruptcy classification. Sentiment analysis is an essential NLP task, which can be divided into three categories: lexicon-based sentiment analysis, machine learning-based sentiment analysis and the hybrid approach <ref type="bibr" target="#b55">[56]</ref>.</p><p>Lexicon-based sentiment analysis was used in only one article <ref type="bibr" target="#b10">[11]</ref>, due to the need for an opinion lexicon in this field. Machine learning-based sentiment analysis uses the bag-of-words method <ref type="bibr" target="#b47">[48]</ref>, <ref type="bibr" target="#b56">[57]</ref> and word embeddings <ref type="bibr" target="#b47">[48,</ref><ref type="bibr" target="#b57">58,</ref><ref type="bibr" target="#b56">57,</ref><ref type="bibr" target="#b9">10]</ref> with CNN <ref type="bibr" target="#b57">[58,</ref><ref type="bibr" target="#b56">57]</ref> and LSTM <ref type="bibr" target="#b47">[48,</ref><ref type="bibr" target="#b9">10]</ref> methods.</p><p>The research in <ref type="bibr" target="#b3">[4]</ref> used bag-of-words and word-embedding methods with LSTM; the results showed that LSTM models can outperform all traditional machine learning models based on the bag-of-words approach, especially when the word embeddings are further pre-trained with transfer learning. The main focus of the financial articles is on future returns prediction, especially stock prices or stock indexes. The main reason is the availability of data sources for scientific research. 
In this field, scientific researchers very often combine different methods <ref type="bibr" target="#b48">[49,</ref><ref type="bibr" target="#b58">59,</ref><ref type="bibr" target="#b59">60]</ref> or make model modifications <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b49">50,</ref><ref type="bibr" target="#b60">61,</ref><ref type="bibr" target="#b61">62]</ref> for better prediction results. Some authors <ref type="bibr" target="#b47">[48,</ref><ref type="bibr" target="#b62">63]</ref> analyze the results of several different deep learning models for deeper future model development, see Fig. <ref type="figure" target="#fig_10">13</ref>.</p><p>The most popular methods are CNN and LSTM. However, no application of the DBM and GAN methods was found in the finance field.</p><p>In some papers the data is not normalized, e.g. cryptocurrency prices <ref type="bibr" target="#b50">[51]</ref> or demand <ref type="bibr" target="#b17">[18]</ref>. Therefore, predictive accuracy measurements, such as RMSE, MPE and others, cannot be compared with the works of other authors, or sometimes even within the same paper, e.g. the RMSE for Bitcoin is 2.75×10³ while for Ripple it is 0.0499 <ref type="bibr" target="#b50">[51]</ref>.</p></div>
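The comparability problem just described is usually addressed by scaling each series into a common range before computing error metrics. A minimal min-max normalization sketch (the price values are illustrative, not from the cited papers):

```python
import numpy as np

def min_max(x):
    # scales a series into [0, 1]; error metrics such as RMSE computed on
    # normalized series then live on the same scale and can be compared
    return (x - x.min()) / (x.max() - x.min())

btc = np.array([30000., 35000., 40000.])   # toy Bitcoin-like price levels
xrp = np.array([0.40, 0.45, 0.50])         # toy Ripple-like price levels
btc_n, xrp_n = min_max(btc), min_max(xrp)  # both now in [0, 1]
```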
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>This paper reviewed the settled deep learning architectures: the convolution neural network, deep belief network, deep Boltzmann machine, deep neural network, deep Q-learning, the extreme learning machine, the generative adversarial network, the recurrent neural network, long short-term memory and the gated recurrent unit, and their applicability in the finance field. This review reveals that the financial articles:</p><p>1. mainly focus on the forecasting task rather than classification; 2. have started using natural language processing techniques, mostly sentiment analysis, for better prediction results; 3. do not use the deep learning methods in their 'basic' form, i.e.</p><p>they are often combined with several different models or merged into a voting classifier.</p><p>Furthermore, this analysis has shown the importance of a balanced data set and of normalization of the data which is submitted to deep learning networks.</p><p>The main limitation of this work is that it represents only a big picture of the developing scientific articles in the Deep Learning in Finance category. Therefore, future research needs to extend the search keywords in electronic libraries, i.e. to search by each method.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>//ceur-ws.org ISSN 1613-0073 CEUR Workshop Proceedings (CEUR-WS.org)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The connection of AI, ML and DL</figDesc><graphic coords="3,74.41,70.16,218.25,104.86" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Convolution neural network architecture.</figDesc><graphic coords="4,74.41,70.16,446.47,150.11" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Deep Belief Network architecture.</figDesc><graphic coords="4,74.41,265.82,218.25,140.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Deep Belief Network architecture</figDesc><graphic coords="4,302.62,265.82,218.25,200.06" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :Figure 6 :</head><label>56</label><figDesc>Figure 5: Deep neural network architecture</figDesc><graphic coords="5,74.41,70.16,218.25,174.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :Figure 8 :</head><label>78</label><figDesc>Figure 7: The extreme learning machine architecture</figDesc><graphic coords="5,74.41,282.34,218.25,297.09" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 9 :Figure 10 :</head><label>910</label><figDesc>Figure 9: Recurrent Neural Learning architecture</figDesc><graphic coords="6,74.41,75.01,446.48,112.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: The extreme learning machine architecture</figDesc><graphic coords="6,74.41,465.39,446.45,205.92" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: Categorical classification of analyzed articles</figDesc><graphic coords="7,302.62,70.16,218.25,164.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 13 :</head><label>13</label><figDesc>Figure 13: Use of deep learning methods in financial context.</figDesc><graphic coords="8,74.41,333.08,218.25,115.09" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Detailed topics from Finance perspective</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Financial innovation based on artificial intelligence technologies</title>
		<author>
			<persName><forename type="first">C</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 International Conference on Artificial Intelligence and Computer Science</title>
				<meeting>the 2019 International Conference on Artificial Intelligence and Computer Science</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="750" to="754" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Artificial intelligence: accelerator or panacea for financial crime?</title>
		<author>
			<persName><forename type="first">P</forename><surname>Yeoh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Financial Crime</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">A survey on deep learning: one small step toward ai</title>
		<author>
			<persName><forename type="first">D</forename><surname>Mo</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<pubPlace>USA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Dept. Computer Science, Univ. of New Mexico</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">M</forename><surname>Cockburn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Henderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stern</surname></persName>
		</author>
		<title level="m">The impact of artificial intelligence on innovation</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
		<respStmt>
			<orgName>National bureau of economic research</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Deep recurrent neural networks for hyperspectral image classification</title>
		<author>
			<persName><forename type="first">L</forename><surname>Mou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ghamisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">X</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Geoscience and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="3639" to="3655" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A novel training method to preserve generalization of rbpnn classifiers applied to ecg signals diagnosis</title>
		<author>
			<persName><forename type="first">F</forename><surname>Beritelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Capizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lo Sciuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">108</biblScope>
			<biblScope unit="page" from="331" to="338" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Rainfall estimation based on the intensity of the received signal in a lte/4g mobile terminal by using a probabilistic neural network</title>
		<author>
			<persName><forename type="first">F</forename><surname>Beritelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Capizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lo Sciuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Scaglione</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="30865" to="30873" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Advanced Machine Learning with Python</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hearty</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Packt Publishing Ltd</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A hierarchical deep neural network design for stock returns prediction</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lachiheb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Gouider</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">126</biblScope>
			<biblScope unit="page" from="264" to="272" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A method of sentiment polarity identification in financial news using deep learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Katayama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tsuda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">159</biblScope>
			<biblScope unit="page" from="1287" to="1294" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Deep learning for financial sentiment analysis on finance news providers</title>
		<author>
			<persName><forename type="first">M.-Y</forename><surname>Day</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-C</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016. 2016</date>
			<biblScope unit="page" from="1127" to="1134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Small lung nodules detection based on fuzzy-logic and probabilistic neural network with bioinspired reinforcement learning</title>
		<author>
			<persName><forename type="first">G</forename><surname>Capizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lo Sciuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Polap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wozniak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Portfolio formation with preselection using deep learning from long-term financial data</title>
		<author>
			<persName><forename type="first">Wuyu</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Weizi</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ning</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kecheng</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page" from="11" to="42" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Artificial intelligence, machine learning and deep learning</title>
		<author>
			<persName><forename type="first">Pariwat</forename><surname>Ongsulee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2017 15th International Conference on ICT and Knowledge Engineering (ICT&amp;KE), IEEE</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">An introduction to neural networks and deep learning</title>
		<author>
			<persName><forename type="first">Heung-Il</forename><surname>Suk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Deep Learning for Medical Image Analysis</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="3" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><surname>Mathworks</surname></persName>
		</author>
		<title level="m">Introducing Deep Learning with MATLAB</title>
				<imprint>
			<publisher>MATHWORKS</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A novel rnn based load modelling method with measurement data in active distribution system</title>
		<author>
			<persName><forename type="first">Chao</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shaorong</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yilu</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chengxi</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electric Power Systems Research</title>
		<imprint>
			<biblScope unit="volume">166</biblScope>
			<biblScope unit="page" from="112" to="124" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deep learning in business analytics and operations research: Models, applications and managerial implications</title>
		<author>
			<persName><forename type="first">Mathias</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Feuerriegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Asil</forename><surname>Oztekin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">European Journal of Operational Research</title>
		<imprint>
			<biblScope unit="volume">281</biblScope>
			<biblScope unit="page" from="628" to="641" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">Francois</forename><surname>Chollet</surname></persName>
		</author>
		<title level="m">Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek</title>
				<imprint>
			<publisher>MITP-Verlags GmbH &amp; Co. KG</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Guide to the Sequential model -Keras Documentation</title>
		<author>
			<persName><surname>Keras</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">keras</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Cascade feed forward neural network-based model for air pollutants evaluation of single monitoring stations in urban areas</title>
		<author>
			<persName><forename type="first">G</forename><surname>Capizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">L</forename><surname>Sciuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Monforte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Electronics and Telecommunications</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="327" to="332" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Deep belief networks-based framework for malware detection in android systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>Saif</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>El-Gokhy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sallam</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Alexandria engineering journal</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="4049" to="4057" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<author>
			<persName><forename type="first">Nagaraj</forename><surname>Balakrishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Arunkumar</forename><surname>Rajendran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Danilo</forename><surname>Pelusi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vijayakumar</forename><surname>Ponnusamy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Deep belief network enhanced intrusion detection system to prevent security breach in the internet of things</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page">100112</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Unsupervised deep learning: A short review</title>
		<author>
			<persName><forename type="first">J</forename><surname>Karhunen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Raiko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Independent Component Analysis and Learning Machines</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="125" to="142" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Empirical mode decomposition based multi-objective deep belief network for short-term power load forecasting</title>
		<author>
			<persName><forename type="first">Chaodong</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Changkun</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jinhua</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leyi</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhaoyang</forename><surname>Ai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">388</biblScope>
			<biblScope unit="page" from="110" to="123" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Multimodal learning with deep boltzmann machines</title>
		<author>
			<persName><forename type="first">N</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Salakhutdinov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="2222" to="2230" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Emotion recognition from thermal infrared images using deep boltzmann machine</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Ji</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers of Computer Science</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="609" to="618" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Comparing of deep neural networks and extreme learning machines based on growing and pruning approach</title>
		<author>
			<persName><forename type="first">K</forename><surname>Akyol</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">140</biblScope>
			<biblScope unit="page">112875</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">A deep learning approach for credit scoring using credit default swaps</title>
		<author>
			<persName><forename type="first">C</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Engineering Applications of Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">65</biblScope>
			<biblScope unit="page" from="465" to="470" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Deep q-learning to preserve connectivity in multi-robot systems</title>
		<author>
			<persName><forename type="first">W</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th International Conference on Signal Processing Systems</title>
				<meeting>the 9th International Conference on Signal Processing Systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="45" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">A reinforcement learning-based qam/psk symbol synchronizer</title>
		<author>
			<persName><forename type="first">M</forename><surname>Matta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">C</forename><surname>Cardarilli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Di Nunzio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fazzolari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Giardino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nannarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Re</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Spanò</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="124147" to="124157" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Attention-based experience replay in deep q-learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ramicic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bonarini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th International Conference on Machine Learning and Computing</title>
				<meeting>the 9th International Conference on Machine Learning and Computing</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="476" to="481" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Automatic collision avoidance of multiple ships based on deep q-learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hashimoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matsuda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Taniguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Terada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Ocean Research</title>
		<imprint>
			<biblScope unit="volume">86</biblScope>
			<biblScope unit="page" from="268" to="288" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Blockchain-based distributed software-defined vehicular networks via deep q-learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">R</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications</title>
				<meeting>the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="8" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">W.-Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X.-Y</forename><surname>Guan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Peng</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1905.08152</idno>
		<title level="m">Stochastic variance reduction for deep q-learning</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Design of high transmission color filters for solar cells directed by deep q-learning</title>
		<author>
			<persName><forename type="first">I</forename><surname>Sajedian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Solar Energy</title>
		<imprint>
			<biblScope unit="volume">195</biblScope>
			<biblScope unit="page" from="670" to="676" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Discrimination of 𝛽-thalassemia and iron deficiency anemia through extreme learning machine and regularized extreme learning machine based decision support system</title>
		<author>
			<persName><forename type="first">B</forename><surname>Çil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ayyıldız</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tuncer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Medical Hypotheses</title>
		<imprint>
			<biblScope unit="volume">138</biblScope>
			<biblScope unit="page">109611</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Unsupervised feature selection based extreme learning machine for clustering</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G.-B</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">386</biblScope>
			<biblScope unit="page" from="198" to="207" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Evolutionary extreme learning machine with sparse cost matrix for imbalanced learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-Y</forename><surname>Hao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-L</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISA transactions</title>
		<imprint>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="page" from="198" to="209" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Online sequential class-specific extreme learning machine for binary imbalanced learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Shukla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">S</forename><surname>Raghuwanshi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="235" to="248" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Learning spectral and spatial features based on generative adversarial network for hyperspectral image super-resolution</title>
		<author>
			<persName><forename type="first">R</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="3161" to="3164" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Colorless video rendering system via generative adversarial networks</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="464" to="467" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Identity-preserving conditional generative adversarial network</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 International Joint Conference on Neural Networks (IJCNN)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<title level="m">Deep learning</title>
		<imprint>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, MA</pubPlace>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Some remarks on the application of rnn and prnn for the charge-discharge simulation of advanced lithium-ions battery energy storage</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bonanno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Capizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Power Electronics Power Electronics, Electrical Drives, Automation and Motion</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="941" to="945" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Comparative analysis of recurrent and finite impulse response neural networks in time series prediction</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miljanovic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Indian Journal of Computer Science and Engineering</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="180" to="191" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">A deep learning approach for intrusion detection using recurrent neural networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="21954" to="21961" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Decision support from financial disclosures with deep neural networks and transfer learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feuerriegel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Support Systems</title>
		<imprint>
			<biblScope unit="volume">104</biblScope>
			<biblScope unit="page" from="38" to="48" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">Applicability of deep learning models for stock price forecasting: an empirical study on BANKEX data</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Balaji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Ram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">B</forename><surname>Nair</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page" from="947" to="953" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Forecasting crude oil prices: a deep learning based model</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Tso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">122</biblScope>
			<biblScope unit="page" from="300" to="307" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Cryptocurrency forecasting with deep learning chaotic neural networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lahmiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bekiros</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Chaos, Solitons &amp; Fractals</title>
		<imprint>
			<biblScope unit="volume">118</biblScope>
			<biblScope unit="page" from="35" to="40" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">A deep learning approach for credit scoring of peer-to-peer lending using attention mechanism lstm</title>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Luo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="2161" to="2168" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Sentiment analysis based on gated recurrent unit</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Santur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 International Artificial Intelligence and Data Processing Symposium (IDAP)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Classification performance using gated recurrent unit recurrent neural network on energy disaggregation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2016 international conference on machine learning and cybernetics (ICMLC)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="105" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<analytic>
		<title level="a" type="main">A hybrid deep learning model for consumer credit scoring</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yuan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 International Conference on Artificial Intelligence and Big Data (ICAIBD)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="205" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">Deep learning-based sentiment classification of evaluative text based on multi-feature fusion</title>
		<author>
			<persName><forename type="first">A</forename><surname>Abdi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Shamsuddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Piran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Processing &amp; Management</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="1245" to="1259" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b56">
	<analytic>
		<title level="a" type="main">A local and global event sentiment based efficient stock exchange forecasting using deep learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Maqsood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Mehmood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maqsood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yasir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Afzal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Aadil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Selim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Muhammad</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Management</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="432" to="451" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<analytic>
		<title level="a" type="main">Deep learning models for bankruptcy prediction using textual disclosures</title>
		<author>
			<persName><forename type="first">F</forename><surname>Mai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">European Journal of Operational Research</title>
		<imprint>
			<biblScope unit="volume">274</biblScope>
			<biblScope unit="page" from="743" to="758" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">Forecasting of forex time series data based on deep learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Qi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">147</biblScope>
			<biblScope unit="page" from="647" to="652" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">Portfolio management via two-stage deep learning with a joint cost</title>
		<author>
			<persName><forename type="first">H</forename><surname>Yun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">S</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Seok</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page">113041</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<analytic>
		<title level="a" type="main">Dsanet: Dual self-attention network for multivariate time series forecasting</title>
		<author>
			<persName><forename type="first">S</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th ACM International Conference on Information and Knowledge Management</title>
		<meeting>the 28th ACM International Conference on Information and Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2129" to="2132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b61">
	<analytic>
		<title level="a" type="main">Continuous control with stacked deep dynamic recurrent reinforcement learning for portfolio optimization</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Aboussalah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-G</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">140</biblScope>
			<biblScope unit="page">112891</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b62">
	<analytic>
		<title level="a" type="main">Financial quantitative investment using convolutional neural network and deep learning technology</title>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">390</biblScope>
			<biblScope unit="page" from="384" to="390" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
