<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Deep Learning Methods Application in Finance: A Review of State of Art</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Dovilė Kuizinienė</string-name>
          <email>dovile.kuiziniene@vdu.lt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomas Krilavičius</string-name>
          <email>tomas.krilavicius@vdu.lt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Applied Informatics Vytautas Magnus University Kaunas</institution>
          ,
          <country country="LT">Lithuania</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IVUS 2020: Information Society and University Studies</institution>
          ,
          <addr-line>23</addr-line>
        </aff>
      </contrib-group>
      <fpage>59</fpage>
      <lpage>69</lpage>
      <abstract>
<p>Artificial intelligence use in financial markets and business units forms financial innovations. These innovations are a key indicator of economic growth and of the formation of an intelligent finance system. In recent years, scientists and the most innovation-driving companies, such as Google, IBM, Microsoft and others, have been focusing on deep learning methods. These methods have achieved significant performance in diverse areas: image recognition, natural language processing, speech recognition, video processing, etc. Therefore, it is necessary to understand the variety of deep learning methods first, and only then their applicability in the financial field. Accordingly, this paper first presents the differences among the deep learning architectures already settled in the scientific community. Secondly, it shows a big picture of the developing scientific articles on deep learning uses in the finance field, where the most used deep learning methods are identified. Finally, conclusions, limitations and future work are presented.</p>
      </abstract>
      <kwd-group>
<kwd>Artificial intelligence</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Convolution Neural Network</kwd>
        <kwd>Deep Belief Network</kwd>
        <kwd>Deep Boltzmann Machine</kwd>
        <kwd>Deep neural network</kwd>
        <kwd>Deep Q-Learning</kwd>
        <kwd>Deep reinforcement learning</kwd>
        <kwd>The extreme learning machine</kwd>
        <kwd>Generative adversarial network</kwd>
        <kwd>Recurrent Neural Learning</kwd>
        <kwd>Long short-term memory</kwd>
        <kwd>Gated Recurrent Unit</kwd>
        <kwd>Finance</kwd>
        <kwd>Financial innovations</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The global financial industry is quietly changing
under the catalysis of artificial intelligence (AI) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. AI
represents a clear opportunity to advance the
transformation of the finance industry by providing users with
greater value and increasing firms’ revenues [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The
goal of AI is to invent a machine which can sense,
remember, learn, and recognize like a real human being
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The deep integration of AI technology and finance
is the inevitable result of deepening development and
exploring innovation in these fields [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These
innovations have the potential to directly influence both
the production and the characteristics of a wide range
of products and services, with important implications
for productivity, employment, and competition [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. AI
also improves work efficiency in business and
creates a whole process of intelligent finance [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Applications of AI systems are generally viewed as positive
for economic growth and productivity [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Deep
learning is a recently-developed field belonging to
Artificial Intelligence [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It attempts to learn hierarchical
representations from raw data and is capable of
learning simple concepts first and then successfully
building up more complex concepts by merging the simpler
ones [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]. Companies such as Google, Facebook,
IBM, Microsoft and others use this algorithm for
developing next-generation intelligent applications [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
In finance there are two major problems: 1) to predict
future returns (i.e., stock prices, currencies, indices,
product demand); or 2) to make a categorical
classification (i.e. credit scoring (“good”, “bad”), bankruptcy
(“True”, “False”)). While the issues in finance remain
almost the same over the last several decades, novel
methods, and growing amount of data are changing
the field, especially Machine Learning and Artificial
Intelligence techniques [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Furthermore, exploitation
of additional data sources allows achieving better
results, e.g. satellite images can be used for predicting
economic activity, voice information provides
information about emotions, and textual information extracted
from news and comments gives the sentiments of writers
and audience, etc. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. However, extraction of useful
knowledge out of such a data heap is not trivial; it
requires considerable effort [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Portfolio
management tasks pose more challenges, because there are
two main issues in portfolio formation: (1)
selection of the assets with the highest revenue, and (2)
determination of the value composition of assets in the portfolio
to achieve the goal of maximal potential returns with
minimal risk [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Therefore, this paper is divided into
two parts: 1) different deep learning architectures are
discussed; 2) application of the aforementioned
methods in finance is discussed.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review</title>
      <p>
        The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving" [<xref ref-type="bibr" rid="ref14">14</xref>]. In other words, it tries to mimic the human brain, which is capable of processing complex input data, learning different kinds of knowledge intellectually and fast, and solving different kinds of complicated tasks well [<xref ref-type="bibr" rid="ref3">3</xref>].
      </p>
      <p>
        AI has been part of human thought, slowly evolving in academic research labs [<xref ref-type="bibr" rid="ref14">14</xref>]. Machine learning is a subset of AI: it is the study of computer algorithms that can be improved automatically through experience [<xref ref-type="bibr" rid="ref1">1</xref>]. Machine learning algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs [<xref ref-type="bibr" rid="ref14">14</xref>]. In machine learning, artificial neural networks are a family of models that mimic the structural elegance of the neural system and learn patterns inherent in observations [15] (Fig. 1 shows the connection of AI, ML and DL). The term "deep" refers to the number of layers in the network: the more layers, the deeper the network [16]. Traditional neural networks contain only 2 or 3 layers, while deep networks can have hundreds [16]. Deep learning has developed explosively; compared with shallow learning, it reaches the state of the art in much research [17].
      </p>
      <p>
        In contrast to shallow architectures like kernel machines, which only contain a fixed feature layer (or base function) and a weight-combination layer (usually linear), deep architectures refer to multi-layer networks where each two adjacent layers are connected to each other in some way [<xref ref-type="bibr" rid="ref3">3</xref>]. This introduces unprecedented flexibility to model even highly complex, non-linear relationships between predictor and outcome variables, a quality that has allowed deep neural networks to outperform models from traditional machine learning in a variety of tasks [18]. Deep learning methods have only now become so powerful for technical reasons: computational power (hardware), availability of large datasets and optimization algorithms [18], [19].
      </p>
      <sec id="sec-2-1">
        <title>2.1. Convolution Neural Network</title>
        <p>
          The convolution neural network (CNN) algorithm is separated into two main parts: feature detection and classification (see Fig. 2). The feature detection phase consists of convolution, pooling and rectified linear unit (ReLU) layers. Convolutional filters activate certain features from the data unit (image, video, time series). This layer produces a huge number of features, which creates overfitting problems and expensive computation [<xref ref-type="bibr" rid="ref8">8</xref>]. Pooling layers reduce this problem by aggregating multiple feature values into a single value. Max-pooling is the mostly used pooling operation; in Keras, Average-pooling, Global-max-pooling or Global-average-pooling operations can be used instead [20]. The rectified linear unit (ReLU) is an activation function meant to zero out negative values, whereas a sigmoid "squashes" arbitrary values into the interval [0, 1], producing something that can be interpreted as a probability [19]. These three operations are repeated over tens or hundreds of layers, with each layer learning to detect different features [16]. The classification phase consists of two layers: dropout and fully connected. Dropout consists of randomly dropping out (setting to zero) a number of output features of the layer during training [19]. The fully connected layer produces a vector of K dimensions, where K is the number of classes that the network will be able to predict; this vector contains the probabilities for each class of any image being classified [16, 21]. The quality of the model is evaluated by the cost function in the fully connected layer (sigmoid, softmax or other).
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Deep Belief Network</title>
        <p>
          The power of the Deep Belief Network (DBN) (Fig. 3 and Fig. 4) lies in its ability to reconstruct both the input vector and the learned feature vectors, which is implemented using a layer-by-layer learning strategy [22]. Each layer of a DBN consists of a Restricted Boltzmann Machine (RBM). RBMs follow the principle of the probability distribution to complete their learning cycle [23]. Each RBM is composed of a visible layer (v) and a hidden layer (h), with the number of neurons set up in each layer. The neurons between different layers are fully connected, and the neurons in the same layer are not connected [23]. When an RBM has learned, its feature activations are used as the "data" for training the next RBM in the DBN [24]. The RBM is an unsupervised network which considers the visible layer to the hidden layer as a subnetwork; this hidden layer is then considered as a visible layer to the next layer, and so on [24]. The higher-level features are learned from the previous layers and are believed to be more complicated and to better reflect the information contained in the input data's structures [<xref ref-type="bibr" rid="ref3">3</xref>]. DBN training is divided into two steps: a forward pre-training process and a reverse fine-tuning process [25]. During the pre-training phase, the RBMs are trained one-by-one until the hidden layer of the last RBM; during this phase, the parameters of each RBM can be obtained [23]. A back-propagation network (BP) is set in the last hidden layer of the DBN [25]; BP is applied to fine-tune the parameters using the output labels of the sample data [23].
        </p>
      </sec>
      <sec id="sec-2-7">
        <title>2.3. Deep Boltzmann Machine</title>
        <p>
          The Deep Boltzmann Machine (DBM) has only one undirected network [24]. The DBM, like the DBN, is comprised of Restricted Boltzmann Machines (RBM); the main difference is related to the interaction among the layers of RBMs [25]. For the computation of the conditional probability of the hidden units h1, both the lower visible layer v and the upper hidden layer h2 are incorporated, which makes the DBM differentiated from the DBN and also more robust to noisy observations [15]. There are no direct connections between the units in the same layers. DBM parameters of all layers can be optimized jointly by following the approximate gradient of a variational lower bound on the likelihood objective [26]. Different from the DBN, the DBM can incorporate top-down feedback, which can better propagate uncertainty and hence deal with ambiguous input more robustly [27].
        </p>
      </sec>
      <sec id="sec-2-8">
        <title>2.5. Deep Q-Learning or Deep Reinforcement Learning</title>
        <p>
          The Deep Q-Learning (DQL) and Deep reinforcement learning (DRL) concepts are used interchangeably in the scientific literature (6). In DQL a reinforcement learning algorithm is always used, and in DRL the Q-learning function is often used, because it deals with high-dimensional state space inputs [<xref ref-type="bibr" rid="ref17">30</xref>], [<xref ref-type="bibr" rid="ref18">31</xref>]. A reinforcement learning (RL) process involves an agent learning from interactions with its environment in discrete time steps in order to update its mapping between the perceived state and a probability of selecting possible actions (policy) [<xref ref-type="bibr" rid="ref19">32</xref>]. In other words, RL is commonly used to solve a sequential decision making problem [<xref ref-type="bibr" rid="ref17">30</xref>]. The RL problem is normally formalized using the Markov decision process (MDP) and includes a set of states S, a set of actions A, a transition function T as action distributions, a reward function R and a discount factor γ [<xref ref-type="bibr" rid="ref20">33</xref>]. The solution to the MDP is a policy π : S → A, and the policy should maximize the expected discounted cumulative reward [<xref ref-type="bibr" rid="ref17">30</xref>]. Q-learning, as a typical reinforcement learning approach, mimics human behavior in taking actions in the environment in order to obtain the maximum long-term rewards [<xref ref-type="bibr" rid="ref21">34</xref>]. The DQL process can be viewed as iteratively optimizing the network parameters according to the gradient direction of the loss function at each stage [<xref ref-type="bibr" rid="ref22">35</xref>]. Therefore, an inexact approximate gradient estimation with a large variance can largely deteriorate the representation performance of the deep Q network by driving the network parameters away from the optimal setting, causing the large variability of DQL performance [<xref ref-type="bibr" rid="ref22">35</xref>]. The advantages of deep Q-learning are good results and ease of use (code can be modified easily for different physical problems) [<xref ref-type="bibr" rid="ref23">36</xref>].
        </p>
      </sec>
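      <p>The tabular Q-learning update behind DQL can be sketched in a few lines. The toy MDP, the epsilon-greedy policy and all parameter values below are illustrative assumptions, not taken from the cited works; a deep Q network replaces the table with a neural approximator.</p>

```python
import random

def q_learning(transitions, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy MDP.

    transitions: dict mapping (state, action) -> (next_state, reward, done).
    Returns the learned Q-table as a list of lists.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy policy: explore with probability epsilon
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = transitions[(s, a)]
            # update toward reward plus discounted max value of next state
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical 3-state chain: action 1 moves right (reward 1 at the end),
# action 0 stays put; state 2 is terminal.
T = {(0, 0): (0, 0.0, False), (0, 1): (1, 0.0, False),
     (1, 0): (1, 0.0, False), (1, 1): (2, 1.0, True),
     (2, 0): (2, 0.0, True),  (2, 1): (2, 0.0, True)}
Q = q_learning(T, n_states=3, n_actions=2)
```

      <p>After training, the learned policy (argmax over each row of Q) prefers moving right in both non-terminal states, i.e. it maximizes the expected discounted cumulative reward.</p>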
      <sec id="sec-2-3">
        <title>2.4. Deep Neural Network</title>
        <p>
          Due to the novelty of the concept, in the scientific literature Deep neural network
(DNN) (Fig. 5) can be
identified with all the algorithms analyzed in this paper.
However, in recent years the concept of DNN has become
known as an Artificial Neural Network (ANN) with
hidden layers [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] [
          <xref ref-type="bibr" rid="ref15">28</xref>
          ]. A DNN is typically a feedforward
network, so it can be understood as the Multilayer
Perceptron (MLP or MP). An MLP consists of an input layer,
several hidden layers and one output layer, and it is widely
used for pattern classification, recognition and
prediction [
          <xref ref-type="bibr" rid="ref16">29</xref>
          ].
        </p>
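          <p>A minimal sketch of the feedforward MLP described above; the layer sizes, ReLU hidden activations and softmax output below are illustrative assumptions, not a specific model from the cited papers.</p>

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a multilayer perceptron: ReLU hidden layers
    followed by a softmax output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)            # hidden layer: ReLU
    z = a @ weights[-1] + biases[-1]              # output layer: logits
    e = np.exp(z - z.max(axis=-1, keepdims=True)) # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Hypothetical sizes: 4 inputs, two hidden layers of 8 units, 3 classes.
sizes = [4, 8, 8, 3]
Ws = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
p = mlp_forward(rng.normal(size=(5, 4)), Ws, bs)  # 5 samples in, 5x3 probs out
```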
        <sec id="sec-2-3-1">
          <title>2.6. The Extreme Learning Machine</title>
          <p>
            The extreme learning machine (ELM) is a single-hidden-layer
feedforward network, proposed by Huang in 2012.
In the traditional feed-forward ANN, the training of
the network is iterative, while the process is
transformed into an analytical equation in the ELM [
            <xref ref-type="bibr" rid="ref24">37</xref>
            ].
In ELM the weights between the input and hidden layer
are assigned randomly, following a normal distribution,
and the weights between the hidden and output layers are
learnt in a single step by a pseudo-inverse technique.
During the training, the hidden layer is not learned;
the weight matrix of the output layer is obtained by
solving the optimization problem formulated by some
learning criteria and regularizations [
            <xref ref-type="bibr" rid="ref25">38</xref>
            ]; as shown in the theory, the output weights are solved from a
regularized least-squares problem [
            <xref ref-type="bibr" rid="ref26">39</xref>
            ]. Therefore, ELM offers
benefits such as fast learning speed, ease of
implementation, and less human intervention when compared to
standard neural networks [
            <xref ref-type="bibr" rid="ref27">40</xref>
            ].
          </p>
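            <p>The ELM scheme described above (random, untrained input-to-hidden weights; output weights solved in a single regularized least-squares step) can be sketched as follows. The toy data, tanh activation and ridge constant are assumptions for illustration.</p>

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, ridge=1e-3, seed=0):
    """Random hidden weights, then one-step ridge-regularized
    least squares for the output weights (pseudo-inverse style)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Output weights solved analytically in a single step
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.sin(3 * X[:, 0]) + X[:, 1] ** 2).reshape(-1, 1)  # toy smooth target
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

            <p>Because only one linear solve is required, there is no iterative back-propagation, which is the source of ELM's fast learning speed.</p>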
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>2.7. Generative Adversarial Network</title>
        <p>
          The general idea of the Generative adversarial network (GAN) is that it aims to train a generator to reconstruct high-resolution images that fool a discriminator trained to distinguish generated images from real ones [41] (Fig. 8). This idea involves two competing neural network models: one of them takes noise as input and produces samples (the generator), while the other model (the discriminator) accepts both the data output by the generator and the real data, and separates their sources [42]. The discriminator trains itself to discriminate real data from generated data better, while the generator trains itself to fit the real data distribution so as to fool the discriminator [43]. These two neural networks are trained at the same time, and finally the output is almost the same as the real data [44].
        </p>
        <sec id="sec-2-4-1">
          <title>2.8. Recurrent Neural Learning</title>
          <p>
            Recurrent Neural Learning (RNN) (Fig. 9) is different from the traditional feedforward neural networks, because it has feedback connections, which can be between hidden units or from the output to the hidden units [44, 45]. These connections address the temporal relationship of inputs by maintaining internal states that have memory. An RNN is able to process sequential inputs by having a recurrent hidden state whose activation at each step depends on that of the previous step [<xref ref-type="bibr" rid="ref5">5, 46</xref>]. In other words, an RNN processes not only the current element in the sequence but also its internal state from the previous elements. However, it has been observed that it is difficult to train RNNs to deal with long-term sequential data, as the gradients tend to vanish [5].
          </p>
        </sec>
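        <p>The recurrent hidden state and the vanishing-gradient effect mentioned above can be illustrated with a small sketch. The weight scales and sequence length are assumptions chosen so that the recurrent Jacobian is contractive.</p>

```python
import numpy as np

def rnn_states(x_seq, W_x, W_h):
    """Simple Elman-style RNN: the hidden state at each step depends on
    the current input and, via the feedback connection, the previous state."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return states

def gradient_norms(states, W_h):
    """Norms of d h_T / d h_t going backwards through time: the product of
    per-step Jacobians diag(1 - h^2) @ W_h shrinks when W_h is contractive,
    illustrating the vanishing gradient."""
    J = np.eye(W_h.shape[0])
    norms = [1.0]
    for h in reversed(states[1:]):
        J = J @ (np.diag(1.0 - h ** 2) @ W_h)  # chain rule through tanh
        norms.append(np.linalg.norm(J))
    return norms

rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.5, size=(4, 3))
W_h = rng.normal(scale=0.2, size=(4, 4))       # small recurrent weights
xs = rng.normal(size=(30, 3))
norms = gradient_norms(rnn_states(xs, W_x, W_h), W_h)
```

        <p>The norms decay rapidly with the distance in time, which is exactly why gated architectures such as LSTM and GRU (below) were introduced.</p>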
      </sec>
      <sec id="sec-2-5">
        <title>2.9. Long short-term memory</title>
        <p>
          Long short-term memory (LSTM) networks extend RNNs with a memory
state controlled by three gates. The input gate determines whether the
current information should be treated as input in order to
generate the current state [51], whilst the forget gate
determines which information is to be forgotten from the
memory state [52]. Finally, the output gate filters the
information that can actually be treated as significant
and produces the output [52]. The “gate” structure is
implemented using the sigmoid function, which
denotes how much information is allowed to pass.
For one hidden layer in LSTM, the activation function is
used in forward propagation, and the gradient is used in
backward propagation [
          <xref ref-type="bibr" rid="ref25">38</xref>
          ].
        </p>
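          <p>The three sigmoid gates described above can be sketched as one LSTM cell step. The weight layout and parameter names (Wi, Wf, Wo, Wg) are illustrative assumptions, not a reference implementation.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, P):
    """One LSTM step. P holds hypothetical weight matrices/biases;
    each gate is a sigmoid deciding how much information passes."""
    concat = np.concatenate([h_prev, x])
    i = sigmoid(P["Wi"] @ concat + P["bi"])   # input gate
    f = sigmoid(P["Wf"] @ concat + P["bf"])   # forget gate: what to drop
    o = sigmoid(P["Wo"] @ concat + P["bo"])   # output gate: what to emit
    g = np.tanh(P["Wg"] @ concat + P["bg"])   # candidate memory content
    c = f * c_prev + i * g                    # updated memory state
    h = o * np.tanh(c)                        # filtered, significant output
    return h, c

rng = np.random.default_rng(0)
n_h, n_x = 5, 3
P = {f"W{k}": rng.normal(size=(n_h, n_h + n_x)) for k in "ifog"}
P.update({f"b{k}": np.zeros(n_h) for k in "ifog"})
h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.normal(size=(10, n_x)):          # run over a short toy sequence
    h, c = lstm_step(x, h, c, P)
```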
      </sec>
      <sec id="sec-2-6">
        <title>2.10. Gated recurrent unit</title>
          <p>
            Gated Recurrent Unit (GRU) is aimed at solving the vanishing
gradient problem which comes with a standard
RNN [
            <xref ref-type="bibr" rid="ref28">53</xref>
            ]. GRU consists of two gates: an update gate (zt)
and a reset gate (rt). The update gate decides how much
the unit updates its activation, or content, and the reset
gate allows the unit to forget the previously computed state [
            <xref ref-type="bibr" rid="ref29">54</xref>
            ].
GRU is less complex than LSTM; it does not
possess any internal memory or output gate like
LSTM [49].
          </p>
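        <p>A minimal sketch of one GRU step with the update gate zt and reset gate rt described above; the weight names and sizes are illustrative assumptions.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, P):
    """One GRU step: two gates, no separate memory cell or output gate."""
    concat = np.concatenate([h_prev, x])
    z = sigmoid(P["Wz"] @ concat + P["bz"])          # update gate z_t
    r = sigmoid(P["Wr"] @ concat + P["br"])          # reset gate r_t
    # Candidate activation uses the reset-gated previous state
    h_tilde = np.tanh(P["Wh"] @ np.concatenate([r * h_prev, x]) + P["bh"])
    return (1 - z) * h_prev + z * h_tilde            # gated interpolation

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
P = {k: rng.normal(size=(n_h, n_h + n_x)) for k in ("Wz", "Wr", "Wh")}
P.update({k: np.zeros(n_h) for k in ("bz", "br", "bh")})
h = np.zeros(n_h)
for x in rng.normal(size=(8, n_x)):                  # toy sequence
    h = gru_step(x, h, P)
```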
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Application of Deep Learning Methods</title>
      <sec id="sec-4-1">
        <p>Articles were included from the electronic libraries: Science Direct, IEEE, Scopus, ACM, Emerald, SpringerLink, JSTOR, EBSCO and others.</p>
        <p>The analyzed period runs from 2017 till 2020. The
review was conducted in January 2020. The keywords “Deep
learning” and “Finance” were used for article
selection. All methods presented in this review match
the term “Deep learning”; therefore an individual search
by each method was not performed. The same holds for the
term “Finance”, which includes accounting, financial
markets, risks, etc. Therefore, this paper presents
a big picture of the developing scientific articles in the Deep
learning in Finance category. 33 papers were selected
and analyzed. The analyzed articles can be categorized
by the problematic of the given task: to predict future
returns or to make a classification of results. Sometimes,
for better results, natural language processing
algorithms are used (Fig. 12 and Tab. 1).</p>
        <p>
          The classification algorithms in finance have most often
been applied to credit scoring, which divides
loans into “good” and “bad”. For solving this problem,
authors used DBN [
          <xref ref-type="bibr" rid="ref16">29</xref>
          ], modified LSTM [52] and
CNN [
          <xref ref-type="bibr" rid="ref30">55</xref>
          ] networks. The results cannot be compared
due to the different classifier evaluation methods used and
differences in data sources. A big problem in the credit scoring
topic is unbalanced data sets, i.e. the authors of [
          <xref ref-type="bibr" rid="ref30">55</xref>
          ]
used a data set where creditworthy instances made up 91.55
percent, while the CNN accuracy rate was 91.64 percent. For
bankruptcy and investment market structure the
CNN network was used, and for tax evasion the DQL network.
Articles in the financial field are interested in obtaining knowledge
from words and using it as indicators; therefore,
a trend toward natural language processing techniques is seen.
The goal of natural language processing (NLP)
is to process text using computational linguistics,
text analysis, machine learning, statistical and
linguistic knowledge in order to analyze and extract
significant information [
          <xref ref-type="bibr" rid="ref31">56</xref>
          ]. Researchers in the financial field
are using sentiment analysis for better stock price
prediction or bankruptcy classification. Sentiment
analysis is the essential NLP task, and it can be divided
into three categories: lexicon-based sentiment
analysis, machine learning-based sentiment analysis and
the hybrid approach [
          <xref ref-type="bibr" rid="ref31">56</xref>
          ].
        </p>
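        <p>Why an unbalanced data set makes accuracy misleading can be seen with a toy calculation: with the class balance reported in [55] (91.55% creditworthy), a trivial classifier that always predicts “good” already scores 91.55%, barely below the reported 91.64%. The data below are synthetic, sized to match that balance.</p>

```python
def accuracy(y_true, y_pred):
    """Fraction of matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Synthetic data set with ~91.55% majority class: 1 = "good" loan, 0 = "bad".
y_true = [1] * 9155 + [0] * 845
majority = [1] * 10000          # baseline that always predicts "good"

acc = accuracy(y_true, majority)  # high accuracy without learning anything
```

        <p>Balanced metrics (precision/recall on the minority class, AUC) avoid this trap, which is why raw accuracy figures from different credit-scoring papers are hard to compare.</p>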
        <p>
          Lexicon-based sentiment analysis was used in only
one article [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], due to the need for an opinion lexicon in
this field. Machine learning-based sentiment
analysis uses the bag-of-words method [48], [
          <xref ref-type="bibr" rid="ref32">57</xref>
          ] and word
embeddings [
          <xref ref-type="bibr" rid="ref10 ref32 ref33">48, 58, 57, 10</xref>
          ] with CNN [
          <xref ref-type="bibr" rid="ref32 ref33">58, 57</xref>
          ] and LSTM
[
          <xref ref-type="bibr" rid="ref10">48, 10</xref>
          ] methods.
        </p>
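        <p>A minimal sketch of the bag-of-words featurization used by such pipelines: each document becomes a vector of word counts over a shared vocabulary. The toy documents below are invented for illustration; the cited papers' exact preprocessing differs.</p>

```python
from collections import Counter

def bag_of_words(docs):
    """Build a sorted shared vocabulary, then turn each document
    into a vector of word counts over that vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    counts_per_doc = [Counter(d.lower().split()) for d in docs]
    vectors = [[c.get(w, 0) for w in vocab] for c in counts_per_doc]
    return vocab, vectors

docs = ["stocks rally on strong earnings",
        "weak earnings hit stocks"]
vocab, X = bag_of_words(docs)   # X: one count vector per document
```

        <p>Word embeddings replace these sparse count vectors with dense learned vectors, which is what makes them suitable inputs for CNN and LSTM models.</p>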
        <p>
          In the research of [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], bag-of-words and word-embedding
methods were used for LSTM; the results showed that
LSTM models can outperform all traditional machine
learning models based on the bag-of-words approach,
especially when the word embeddings are further pre-trained
with transfer learning. The main focus of financial articles
is on future returns prediction, especially of stock
prices or stock indexes; the main reason is data source
availability for scientific research. In this field,
scientific researchers very often combine different methods
together [
          <xref ref-type="bibr" rid="ref34 ref35">49, 59, 60</xref>
          ] or make model
modifications [
          <xref ref-type="bibr" rid="ref13 ref36 ref37">13, 50, 61, 62</xref>
          ] for better prediction results. Some
authors [
          <xref ref-type="bibr" rid="ref38">48, 63</xref>
          ] analyze the results of several different deep learning
models for deeper future model
development, see Fig. 13.
        </p>
        <p>The most popular methods are CNN and LSTM.
However, no application of the DBM and GAN methods
was found in the finance field.</p>
        <p>In some papers the data is not normalized, i.e.
cryptocurrency prices [51] or demand [18]. Therefore,
predictive accuracy measurements, such as RMSE, MPE
and others, cannot be compared with other
authors’ works, or sometimes even within the same paper, e.g.
RMSE for Bitcoin is 2.75×10³ while for Ripple it is 0.0499 [51].</p>
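        <p>The scale problem can be illustrated with a toy calculation: two forecasts with the same 1% relative error yield RMSE values differing by orders of magnitude, while a scale-free measure such as MAPE does not. The prices below are hypothetical, not the series from [51].</p>

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: carries the units/scale of the series."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error: scale-free."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical prices on very different scales, same 1% relative error
bitcoin = np.array([9000.0, 9100.0, 9200.0])
ripple = np.array([0.20, 0.21, 0.22])
rmse_btc = rmse(bitcoin, bitcoin * 1.01)   # on the order of tens of dollars
rmse_xrp = rmse(ripple, ripple * 1.01)     # on the order of fractions of a cent
```

        <p>Normalizing the series before training (or reporting a relative metric) is what makes results comparable across papers.</p>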
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusions</title>
      <p>This paper presented the settled deep learning architectures: convolution
neural network, deep belief network, deep Boltzmann machine,
deep neural network, deep Q-learning (deep reinforcement learning), the extreme
learning machine, generative adversarial network,
recurrent neural learning, long short-term memory and gated
recurrent unit; and their applicability in the finance field.
This review reveals that financial articles:
1. mainly focus on the forecasting task rather than
classification;
2. have started using natural language processing
techniques, mostly sentiment analysis, for better
prediction results;
3. do not use the deep learning methods in their ‘basic’ form, i.e.
they are often combined with several different
models or merged into a voting classifier.</p>
      <p>Furthermore, this analysis has shown the importance
of a balanced data set and of normalization of the data
submitted to deep learning networks.</p>
      <p>The main limitation of this work is that it represents
only a big picture of the developing scientific articles in the
Deep learning in Finance category. Therefore,
future research needs to extend the search keywords in
electronic libraries, i.e. search by each method.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <source>Financial innovation based on artificial intelligence technologies</source>
          ,
          <source>in: Proceedings of the 2019 International Conference on Artificial Intelligence and Computer Science</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>750</fpage>
          -
          <lpage>754</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] P. Yeoh, Artificial intelligence: accelerator or panacea for financial crime?, Journal of Financial Crime (2019).</mixed-citation>
      </ref>
      <ref id="refb15">
        <mixed-citation>[15] H.-I. Suk, An introduction to neural networks and deep learning, in: Deep Learning for Medical Image Analysis, Elsevier, 2017, pp. 3-24.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] D. Mo, A survey on deep learning: one small step toward ai, Dept. Computer Science, Univ. of New Mexico, USA (2012).</mixed-citation>
      </ref>
      <ref id="refb16">
        <mixed-citation>[16] MATHWORKS, Introducing Deep Learning with MATLAB, MATHWORKS, 2020.</mixed-citation>
      </ref>
      <ref id="refb17">
        <mixed-citation>[17] C. Zheng, S. Wang, Y. Liu, C. Liu, A novel rnn based load modelling method with measurement data in active distribution system, Electric Power Systems Research 166 (2019) 112-124.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] I. M. Cockburn, R. Henderson, S. Stern, The impact of artificial intelligence on innovation, Technical Report, National Bureau of Economic Research, 2018.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] L. Mou, P. Ghamisi, X. X. Zhu, Deep recurrent neural networks for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing 55 (2017) 3639-3655.</mixed-citation>
      </ref>
      <ref id="refb18">
        <mixed-citation>[18] M. Kraus, S. Feuerriegel, A. Oztekin, Deep learning in business analytics and operations research: Models, applications and managerial implications, European Journal of Operational Research 281 (2020) 628-641.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] F. Beritelli, G. Capizzi, G. Lo Sciuto, C. Napoli, M. Woźniak, A novel training method to preserve generalization of rbpnn classifiers applied to ecg signals diagnosis, Neural Networks 108 (2018).
          [19] F. Chollet, Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler
          <article-title>der Keras-Bibliothek,</article-title>
          <string-name>
            <surname>MITP-Verlags</surname>
            <given-names>GmbH</given-names>
          </string-name>
          &amp; Co.
          <fpage>331</fpage>
          -
          <lpage>338</lpage>
          . KG,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Beritelli</surname>
          </string-name>
          , G. Capizzi,
          <string-name>
            <given-names>G. Lo</given-names>
            <surname>Sciuto</surname>
          </string-name>
          , C. Napoli, [
          <volume>20</volume>
          ]
          <article-title>keras, Guide to the Sequential model - Keras DocF</article-title>
          . Scaglione,
          <article-title>Rainfall estimation based on the in- umentation</article-title>
          , keras,
          <year>2020</year>
          .
          <article-title>tensity of the received signal in a lte/4g mobile</article-title>
          [21]
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Monforte</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Napoli, terminal by using a probabilistic neural network, Cascade feed forward neural network-based IEEE Access 6 (</article-title>
          <year>2018</year>
          )
          <fpage>30865</fpage>
          -
          <lpage>30873</lpage>
          .
          <article-title>model for air pollutants evaluation of single mon-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>J. Hearty,</surname>
          </string-name>
          <article-title>Advanced Machine Learning with itoring stations in urban areas</article-title>
          ,
          <source>International Python, Packt Publishing Ltd</source>
          ,
          <year>2016</year>
          . Journal of Electronics and Telecommunications
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Lachiheb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Gouider</surname>
          </string-name>
          , A hierarchical deep
          <volume>61</volume>
          (
          <year>2015</year>
          )
          <fpage>327</fpage>
          -
          <lpage>332</lpage>
          .
          <article-title>neural network design for stock returns predic-</article-title>
          [22]
          <string-name>
            <given-names>D.</given-names>
            <surname>Saif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>El-Gokhy</surname>
          </string-name>
          , E. Sallam, Deep belief tion,
          <source>Procedia Computer Science</source>
          <volume>126</volume>
          (
          <year>2018</year>
          )
          <article-title>264- networks-based framework for malware detec272. tion in android systems</article-title>
          , Alexandria engineering
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Katayama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tsuda</surname>
          </string-name>
          , A method of sen- journal
          <volume>57</volume>
          (
          <year>2018</year>
          )
          <fpage>4049</fpage>
          -
          <lpage>4057</lpage>
          . timent polarity identification in financial news [23]
          <string-name>
            <surname>Balakrishnan</surname>
          </string-name>
          , Nagaraj, Rajendran, Arunkumar, using deep learning, Procedia Computer Science Pelusi, Danilo, Ponnusamy, Vijayakumar, Deep
          <volume>159</volume>
          (
          <year>2019</year>
          )
          <fpage>1287</fpage>
          -
          <lpage>1294</lpage>
          .
          <article-title>belief network enhanced intrusion detection sys-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>M.-Y. Day</surname>
            ,
            <given-names>C.-C.</given-names>
          </string-name>
          <string-name>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Deep learning for financial tem to prevent security breach in the internet of sentiment analysis on finance news providers</article-title>
          , in: things, Internet of Things (
          <year>2019</year>
          )
          <fpage>100112</fpage>
          . 2016 IEEE/ACM International Conference on Ad- [24]
          <string-name>
            <given-names>J.</given-names>
            <surname>Karhunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Raiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <article-title>Unsupervised deep vances in Social Networks Analysis and Mining learning: A short review</article-title>
          ,
          <source>in: Advances in Inde(ASONAM)</source>
          , IEEE,
          <year>2016</year>
          , pp.
          <fpage>1127</fpage>
          -
          <lpage>1134</lpage>
          . pendent
          <string-name>
            <given-names>Component</given-names>
            <surname>Analysis</surname>
          </string-name>
          and Learning Ma-
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. Lo</given-names>
            <surname>Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , D. Polap, chines, Elsevier,
          <year>2015</year>
          , pp.
          <fpage>125</fpage>
          -
          <lpage>142</lpage>
          . M. Wozniak,
          <source>Small lung nodules detection based [25] Fan</source>
          , Chaodong, Ding, Changkun, Zheng,
          <article-title>Jinon fuzzy-logic and probabilistic neural network hua, Xiao, Leyi, Ai, Zhaoyang, Empirical mode with bioinspired reinforcement learning</article-title>
          ,
          <source>IEEE decomposition based multi-objective deep belief Transactions on Fuzzy Systems</source>
          <volume>28</volume>
          (
          <year>2020</year>
          ).
          <article-title>network for short-term power load forecasting,</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Wuyu</given-names>
          </string-name>
          , Li, Weizi, Zhang, Ning, Liu, Neurocomputing
          <volume>388</volume>
          (
          <year>2020</year>
          )
          <fpage>110</fpage>
          -
          <lpage>123</lpage>
          . Kecheng, Portfolio formation with preselec- [26]
          <string-name>
            <given-names>N.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Salakhutdinov</surname>
          </string-name>
          ,
          <article-title>Multimodal tion using deep learning from long-term finan- learning with deep boltzmann machines, in: Adcial data</article-title>
          ,
          <source>Expert Systems with Applications 143 vances in neural information processing systems</source>
          , (
          <year>2020</year>
          )
          <fpage>11</fpage>
          -
          <lpage>42</lpage>
          .
          <year>2012</year>
          , pp.
          <fpage>2222</fpage>
          -
          <lpage>2230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Ongsulee</surname>
            , Pariwat, Artificial intelligence, ma- [27]
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          <string-name>
            <surname>Ji</surname>
          </string-name>
          ,
          <article-title>Emotion chine learning and deep learning, in: 2017 15th recognition from thermal infrared images using International Conference on ICT and Knowledge deep boltzmann machine, Frontiers of Computer Engineering (ICT&amp;KE)</article-title>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . Science 8 (
          <year>2014</year>
          )
          <fpage>609</fpage>
          -
          <lpage>618</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>K.</given-names>
            <surname>Akyol</surname>
          </string-name>
          ,
          <article-title>Comparing of deep neural networks nary imbalanced learning</article-title>
          ,
          <source>Neural Networks 119 and extreme learning machines based on grow-</source>
          (
          <year>2019</year>
          )
          <fpage>235</fpage>
          -
          <lpage>248</lpage>
          .
          <article-title>ing and pruning approach, Expert Systems with</article-title>
          [41]
          <string-name>
            <given-names>R.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yue</surname>
          </string-name>
          , Applications
          <volume>140</volume>
          (
          <year>2020</year>
          ) 112875. L. Zhang,
          <article-title>Learning spectral and spatial features</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>C.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>A deep learning approach based on generative adversarial network for hyfor credit scoring using credit default swaps, En- perspectral image super-resolution</article-title>
          ,
          <source>in: IGARSS gineering Applications of Artificial Intelligence</source>
          <year>2019</year>
          -2019
          <string-name>
            <given-names>IEEE</given-names>
            <surname>International</surname>
          </string-name>
          <article-title>Geoscience and Re65 (</article-title>
          <year>2017</year>
          )
          <fpage>465</fpage>
          -
          <lpage>470</lpage>
          . mote Sensing Symposium, IEEE,
          <year>2019</year>
          , pp.
          <fpage>3161</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <article-title>Deep q-learning to 3164. preserve connectivity in multi-robot systems</article-title>
          , in: [42]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Colorless video rendering sysProceedings of the 9th International Conference tem via generative adversarial networks</article-title>
          ,
          <source>in: 2019 on Signal Processing Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>50</lpage>
          . IEEE International Conference on Artificial In-
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Matta</surname>
          </string-name>
          ,
          <string-name>
            <surname>Cardarilli</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Nunzio</surname>
          </string-name>
          , Fazzolari, Giardino, telligence and Computer Applications (ICAICA), Nannarelli, Re, Spano, A reinforcement learning- IEEE,
          <year>2019</year>
          , pp.
          <fpage>464</fpage>
          -
          <lpage>467</lpage>
          . based qam/psk symbol synchronizer, Ieee Access [43]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <article-title>Identity-preserving conditional 7 (</article-title>
          <year>2019</year>
          )
          <fpage>124147</fpage>
          -
          <lpage>124157</lpage>
          .
          <article-title>generative adversarial network</article-title>
          , in: 2018 Inter-
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ramicic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bonarini</surname>
          </string-name>
          ,
          <article-title>Attention-based expe- national Joint Conference on Neural Networks rience replay in deep q-learning</article-title>
          ,
          <source>in: Proceedings (IJCNN)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . of the 9th International Conference on Machine [44]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , Learning and Computing,
          <year>2017</year>
          , pp.
          <fpage>476</fpage>
          -
          <lpage>481</lpage>
          .
          <article-title>Deep learning</article-title>
          , volume
          <volume>1</volume>
          , MIT press Cambridge,
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>H.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hashimoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Matsuda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Taniguchi</surname>
          </string-name>
          ,
          <year>2016</year>
          . D.
          <string-name>
            <surname>Terada</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Guo</surname>
            , Automatic collision avoidance [45]
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Bonanno</surname>
            , G. Capizzi,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Some remarks of multiple ships based on deep q-learning, Ap- on the application of rnn and prnn for the chargeplied</article-title>
          <source>Ocean Research</source>
          <volume>86</volume>
          (
          <year>2019</year>
          )
          <fpage>268</fpage>
          -
          <lpage>288</lpage>
          .
          <article-title>discharge simulation of advanced lithium-ions</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>C.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Zhao, battery energy storage, in: International SymBlockchain-based distributed software-defined posium on Power Electronics Power Electronics, vehicular networks via deep q-learning</article-title>
          , in: Pro- Electrical
          <string-name>
            <surname>Drives</surname>
          </string-name>
          , Automation and Motion, IEEE,
          <source>ceedings of the 8th ACM Symposium on Design</source>
          <year>2012</year>
          , pp.
          <fpage>941</fpage>
          -
          <lpage>945</lpage>
          .
          <article-title>and Analysis of Intelligent Vehicular Networks</article-title>
          [46]
          <string-name>
            <given-names>M.</given-names>
            <surname>Miljanovic</surname>
          </string-name>
          ,
          <source>Comparative analysis of recurrent and Applications</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>14</lpage>
          .
          <article-title>and finite impulse response neural networks in</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>W.-Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.-Y.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          , time series prediction,
          <source>Indian Journal of ComJ</source>
          . Peng,
          <article-title>Stochastic variance reduction for deep q-</article-title>
          puter
          <source>Science and Engineering</source>
          <volume>3</volume>
          (
          <year>2012</year>
          )
          <fpage>180</fpage>
          -
          <lpage>191</lpage>
          . learning, arXiv preprint arXiv:
          <year>1905</year>
          .
          <volume>08152</volume>
          (
          <year>2019</year>
          ). [47]
          <string-name>
            <given-names>C.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          , A deep learning
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sajedian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rho</surname>
          </string-name>
          ,
          <article-title>Design of high trans- approach for intrusion detection using recurrent mission color filters for solar cells directed by neural networks</article-title>
          ,
          <source>Ieee Access</source>
          <volume>5</volume>
          (
          <year>2017</year>
          ) 21954
          <article-title>- deep q-learning</article-title>
          ,
          <source>Solar Energy</source>
          <volume>195</volume>
          (
          <year>2020</year>
          )
          <fpage>670</fpage>
          -
          <lpage>21961</lpage>
          .
          <fpage>676</fpage>
          . [48]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Feuerriegel</surname>
          </string-name>
          ,
          <article-title>Decision support from</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>B.</given-names>
            <surname>Çil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ayyıldız</surname>
          </string-name>
          , T. Tuncer,
          <article-title>Discrimina- ifnancial disclosures with deep neural networks tion of  -thalassemia and iron deficiency ane- and transfer learning, Decision Support Systems mia through extreme learning machine</article-title>
          and regu-
          <volume>104</volume>
          (
          <year>2017</year>
          )
          <fpage>38</fpage>
          -
          <lpage>48</lpage>
          .
          <article-title>larized extreme learning machine based decision [49]</article-title>
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Balaji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. B.</given-names>
            <surname>Nair</surname>
          </string-name>
          ,
          <article-title>Applicability support system</article-title>
          ,
          <source>Medical Hypotheses</source>
          <volume>138</volume>
          (
          <year>2020</year>
          )
          <article-title>of deep learning models for stock price forecast109611. ing an empirical study on bankex data</article-title>
          , Procedia
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-B.</given-names>
            <surname>Huang</surname>
          </string-name>
          , Unsu- computer
          <source>science 143</source>
          (
          <year>2018</year>
          )
          <fpage>947</fpage>
          -
          <lpage>953</lpage>
          .
          <article-title>pervised feature selection based extreme learn</article-title>
          - [50]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Tso</surname>
          </string-name>
          ,
          <article-title>Forecasting crude oil ing machine for clustering, Neurocomputing 386 prices: a deep learning based model</article-title>
          ,
          <source>Procedia</source>
          (
          <year>2020</year>
          )
          <fpage>198</fpage>
          -
          <lpage>207</lpage>
          . computer science 122 (
          <year>2017</year>
          )
          <fpage>300</fpage>
          -
          <lpage>307</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-Y.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Evolutionary extreme learning machine with sparse cost matrix for imbalanced learning</article-title>
          ,
          <source>ISA transactions</source>
          <volume>100</volume>
          (
          <year>2020</year>
          )
          <fpage>198</fpage>
          -
          <lpage>209</lpage>
          . [51]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lahmiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bekiros</surname>
          </string-name>
          ,
          <article-title>Cryptocurrency forecasting with deep learning chaotic neural networks</article-title>
          ,
          <source>Chaos, Solitons &amp; Fractals</source>
          <volume>118</volume>
          (
          <year>2019</year>
          )
          <fpage>35</fpage>
          -
          <lpage>40</lpage>
          . [52]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <article-title>A deep learning approach for credit scoring of peer-to-peer lending using attention mechanism LSTM</article-title>
          ,
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2018</year>
          )
          <fpage>2161</fpage>
          -
          <lpage>2168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>S.</given-names>
            <surname>Shukla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Raghuwanshi</surname>
          </string-name>
          ,
          <article-title>Online sequential class-specific extreme learning machine for binary imbalanced learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Santur</surname>
          </string-name>
          ,
          <article-title>Sentiment analysis based on gated recurrent unit</article-title>
          , in: 2019
          <source>International Artificial Intelligence and Data Processing Symposium (IDAP)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          , et al.,
          <article-title>Classification performance using gated recurrent unit recurrent neural network on energy disaggregation</article-title>
          ,
          <source>in: 2016 International Conference on Machine Learning and Cybernetics (ICMLC)</source>
          , volume
          <volume>1</volume>
          , IEEE,
          <year>2016</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <article-title>A hybrid deep learning model for consumer credit scoring</article-title>
          ,
          <source>in: 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>205</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Shamsuddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Piran</surname>
          </string-name>
          ,
          <article-title>Deep learning-based sentiment classification of evaluative text based on multi-feature fusion</article-title>
          ,
          <source>Information Processing &amp; Management</source>
          <volume>56</volume>
          (
          <year>2019</year>
          )
          <fpage>1245</fpage>
          -
          <lpage>1259</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>H.</given-names>
            <surname>Maqsood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mehmood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maqsood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yasir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Afzal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Aadil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Selim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Muhammad</surname>
          </string-name>
          ,
          <article-title>A local and global event sentiment based efficient stock exchange forecasting using deep learning</article-title>
          ,
          <source>International Journal of Information Management</source>
          <volume>50</volume>
          (
          <year>2020</year>
          )
          <fpage>432</fpage>
          -
          <lpage>451</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [58]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <article-title>Deep learning models for bankruptcy prediction using textual disclosures</article-title>
          ,
          <source>European Journal of Operational Research</source>
          <volume>274</volume>
          (
          <year>2019</year>
          )
          <fpage>743</fpage>
          -
          <lpage>758</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [59]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <article-title>Forecasting of forex time series data based on deep learning</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>147</volume>
          (
          <year>2019</year>
          )
          <fpage>647</fpage>
          -
          <lpage>652</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [60]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. S.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Seok</surname>
          </string-name>
          ,
          <article-title>Portfolio management via two-stage deep learning with a joint cost</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>143</volume>
          (
          <year>2020</year>
          )
          <fpage>113041</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [61]
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>DSANet: Dual self-attention network for multivariate time series forecasting</article-title>
          ,
          <source>in: Proceedings of the 28th ACM International Conference on Information and Knowledge Management</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>2129</fpage>
          -
          <lpage>2132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [62]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Aboussalah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-G.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Continuous control with stacked deep dynamic recurrent reinforcement learning for portfolio optimization</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>140</volume>
          (
          <year>2020</year>
          )
          <fpage>112891</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [63]
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Financial quantitative investment using convolutional neural network and deep learning technology</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>390</volume>
          (
          <year>2020</year>
          )
          <fpage>384</fpage>
          -
          <lpage>390</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>