             Deep Neural Networks in Digital Economy1

         Alexey Averkin 1,2[0000-0003-1571-3583] and Sergey Yarushev 2[0000-0003-1352-9301]
1 Dorodnicyn Computing Centre, FRC CSC RAS, Vavilov st. 40, 119333 Moscow, Russia
2 Plekhanov University of Economics, Stremyanny lane 36, Moscow, 117997, Russia
                                   yarushev.sa@rea.com



Abstract. Modern generations of artificial neural networks are more complex and flexible architectures than the networks of the first generations. As application possibilities grow, existing artificial neural networks are being improved. Developing such networks requires solving a number of complex mathematical problems, above all the design of the fastest and most flexible learning algorithms. This paper gives an overview of modern research in applied artificial intelligence and of the most interesting generations of artificial neural networks. Among the most relevant architectures today we should highlight deep neural networks, spiking neural networks and capsule neural networks. Deep learning neural networks are based on representation learning rather than on specialized algorithms designed for specific tasks. Many deep learning methods were known as early as the 1980s, but the results were unimpressive until advances were made in the theory of artificial neural networks, in particular pre-training of neural networks using a special case of an undirected graphical model, the so-called restricted Boltzmann machine. One of the most biologically inspired models of artificial neural networks is the so-called spiking (impulse) neural network, the third generation of neural networks. Spiking neural networks embody the operating principles of biological neurons: unlike other neural network architectures, their neurons exchange short impulses of the same amplitude, which allows tremendous energy efficiency to be achieved. Neuromorphic chips are built on the basis of spiking architectures to simulate the operation of a biological neural network. Capsule networks use small groups of neurons, called capsules, which are organized in layers, exploit spatial relationships, and can recognize objects in video and images. When several capsules in the same layer agree as a result of recognition, they activate a capsule in the layer above, and so on until the network assembles the whole image. This approach is the newest in the field of artificial neural networks.

Keywords: Deep Learning, Digital Economy, Neural Networks, Capsule Neural Networks, Spiking Neural Networks.




1 The reported study was funded by RFBR according to the research project № 17-07-01558


Proceedings of the XXII International Conference “Enterprise Engineering and Knowledge Management”, April 25-26, 2019, Moscow, Russia


1      Introduction

Modern generations of artificial neural networks are more complex and flexible architectures than the networks of the first generations. As application possibilities grow, existing artificial neural networks are being improved. Developing such networks requires solving a number of complex mathematical problems, above all the design of the fastest and most flexible learning algorithms. This paper gives an overview of modern research in applied artificial intelligence and of the most interesting generations of artificial neural networks. Special attention is also paid to the state of artificial intelligence research in Russia and abroad.


2      Deep Learning

Among the most actively developing models of artificial neural networks, the deep models (Deep Learning Neural Networks) should be highlighted. Such models have already proven their effectiveness in image recognition and classification tasks. The most popular architecture is the so-called convolutional neural network. Technological giants allocate billions of dollars for research and development in the field of artificial intelligence. An example of a convolutional neural network is GoogLeNet, the network developed by Google. Among the well-known champion networks, AlexNet should also be highlighted.
   Of the neural network architectures most relevant today, we should highlight deep neural networks, spiking neural networks and capsule neural networks; the latter appeared at the very end of 2017 and literally exploded onto the field of artificial neural networks. We therefore dwell on these three architectures.
   Deep Neural Networks (Deep Learning NNs) are a class of machine learning methods.
   Deep learning is based on representation learning (feature learning) rather than on specialized algorithms designed for specific tasks. Many deep learning methods were known as early as the 1980s, but the results were unimpressive [1]. Only advances in the theory of artificial neural networks (pre-training of neural networks using a special case of an undirected graphical model, the so-called restricted Boltzmann machine) and in hardware (above all Nvidia GPUs, and now Google's tensor processors) made it possible to create complex neural network architectures with sufficient performance. They now solve a wide range of tasks that had no effective solution before, for example in computer vision, machine translation and speech recognition, with quality in many cases comparable to, and sometimes superior to, human experts. Unlike classical machine learning, deep learning requires a much larger training sample, and a deep neural network can have thousands of layers. All this helps deep neural networks achieve high accuracy in analysis, classification and image recognition tasks. The main drawback of deep learning, however, is its enormous resource intensity: to train a deep neural network it is sometimes necessary to assemble a training sample of a million images or more, and the learning process can take several days. Dedicated GPU processors have even been developed for such tasks to speed up the learning process.
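   To make the pre-training idea mentioned above concrete, here is a minimal sketch of one contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine, written in Python with NumPy. The function names, shapes and learning rate are illustrative assumptions for this paper, not code from any of the cited works.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.1, rng=np.random.default_rng(0)):
    # Positive phase: hidden probabilities given the data, then a binary sample.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # CD-1 gradient estimate: data correlations minus model correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

   In layer-wise pre-training, several such machines are stacked: the hidden activities of one trained machine become the visible data for the next, after which the whole stack is fine-tuned as a deep network.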
   The best known example of a deep learning network is the convolutional neural network. This architecture of artificial neural networks was proposed by Yann LeCun in 1989 [2] and is aimed at effective image recognition; a sketch of the convolution-and-pooling computation it relies on follows Fig. 1.




                            Fig. 1. Convolutional neural network
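
   To illustrate what a single convolutional layer computes, the following Python/NumPy sketch applies one filter to a grayscale image, followed by a ReLU nonlinearity and max pooling. The kernel and image sizes are arbitrary toy values, not parameters of GoogLeNet or AlexNet.

import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image (no padding) and take the
    # elementwise product-and-sum at every position.
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping size x size max pooling; trims edges that do not fit.
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.random.rand(28, 28)           # toy grayscale input
kernel = np.array([[1., 0., -1.],        # simple vertical-edge detector
                   [1., 0., -1.],
                   [1., 0., -1.]])
fmap = np.maximum(conv2d_valid(image, kernel), 0.0)  # convolution + ReLU
pooled = max_pool(fmap)                  # 26x26 responses pooled to 13x13

   A real network learns many such kernels per layer and stacks dozens of layers, but each layer repeats exactly this pattern of local filtering, nonlinearity and pooling.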

   Perhaps the most famous examples of convolutional neural networks are AlexNet and GoogLeNet.
   The latter won the ImageNet recognition challenge [3] in 2014 with a top-5 error of 6.67%. Recall that top-5 error is a metric in which the algorithm may output five candidate classes for a picture, and an error is counted only if the correct class is not among them. The test sample contains 150,000 images in 1,000 categories, so the task is extremely nontrivial.
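   A minimal Python/NumPy sketch of how this top-5 error is computed (the scores and labels below are random placeholders, not ImageNet data):

import numpy as np

def top5_error(scores, labels):
    # scores: (n_samples, n_classes) model scores; labels: true class indices.
    top5 = np.argsort(scores, axis=1)[:, -5:]     # 5 highest-scoring classes
    hit = (top5 == labels[:, None]).any(axis=1)   # is the true class among them?
    return 1.0 - hit.mean()

rng = np.random.default_rng(0)
scores = rng.random((1000, 1000))                 # 1000 samples, 1000 classes
labels = rng.integers(0, 1000, size=1000)
print(top5_error(scores, labels))                 # ~0.995 for random guessing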
   Today the largest technology companies are investing billions of dollars in the development of applied artificial intelligence technologies and artificial neural networks.


3      Spiking Neural Networks

It is no secret that scientists working on artificial intelligence face the task of copying the work of biological intelligence as faithfully as possible, in other words, creating the most biologically inspired model of the human brain. One of the most biologically inspired models of artificial neural networks is the so-called spiking (impulse) neural network, the third generation of neural networks. Spiking neural networks [4] embody the operating principles of biological neurons. Unlike other neural network architectures, in spiking networks neurons exchange short impulses of the same amplitude, which allows tremendous energy efficiency to be achieved.
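   How such fixed-amplitude impulses arise can be illustrated with the classic leaky integrate-and-fire neuron model. The Python/NumPy sketch below uses assumed, illustrative constants (threshold, time constant, drive current), not parameters of any particular neuromorphic chip.

import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    # The membrane potential leaks toward v_rest while integrating input;
    # crossing the threshold emits a spike of fixed amplitude and resets.
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        v += dt * (v_rest - v + i_t) / tau   # leaky integration step
        if v >= v_thresh:                    # threshold crossed: spike
            spike_times.append(t)
            v = v_reset                      # reset and integrate again
    return spike_times

# A constant drive yields a regular spike train: information is carried by
# spike timing and rate, not by the (fixed) spike amplitude.
print(simulate_lif(np.full(200, 1.5)))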
   Neuromorphic chips are built on the basis of spiking architectures to simulate the operation of a biological neural network.
   The active development of the new field of neuromorphic technology is associated with developing the principles, architectures and implementations of neurobiological systems. The neuromorphic approach implies a departure from the known models of formal neural networks in favor of software and hardware implementations of models of the functional parts of the brain and nervous system.
   The practical development of this trend is currently supported by IBM, in terms of neuromorphic computing, and, in terms of hardware implementations of neuromorphic technology and its use, by the US defense agency DARPA, which launched the SyNAPSE project in 2008 [5]. Examples of early neuromorphic ASICs are Silicon Retina (an eye model) [6] and Silicon Cochlea (an ear model) [7], among others. Neuromorphic technology should provide for the construction of machines that have human-like perception, the ability for self-organization [8], and robustness to changes in the environment and the control object.
   Neurogrid (Stanford University) takes an analog approach to neuron modeling (10^6 neurons, 6·10^9 synapses). The variability of a large number of parameters allows the study of ensembles of neurons of different types. Its problem is an outdated technological base.
   SpiNNaker (University of Manchester) is aimed at creating a neuromorphic hardware platform for the European Human Brain Project. The project is based on special digital chips from which highly scalable modular systems with different connection topologies can be built. Each chip contains 16 ARM9 processors and can emulate the work of tens of thousands of neurons in real time. The router delivers up to 5·10^9 spikes per second within a chip.
   BrainScaleS (University of Heidelberg) aims to study and simulate the human brain. A hybrid digital-analog neurochip was created, and a system was built to simulate artificial neural networks of 2·10^5 neurons and 5·10^7 synapses.


4      Capsule Neural Networks

Capsule neural networks were proposed by Geoffrey Hinton, one of the researchers who introduced training neural networks by the method of error backpropagation. In 2012 his group applied deep neural networks trained in this way to achieve a breakthrough in image recognition tasks. In October 2017 he published a paper [9] presenting a new architecture of neural networks, called capsule neural networks. This kind of architecture may prove to be a revolution in the field of recognition, as it can solve the well-known problems of convolutional neural networks. The problem of a convolutional neural network (CNN) is that, in the process of learning to recognize images, information about the spatial relationships of the features is lost. A CNN does not remember the positions of image elements in space, so the same object seen from a different angle may be perceived as a different image. It is therefore necessary to assemble a huge training sample of the same object from all angles. Capsule neural networks can use spatial relationships to solve this problem.




                              Fig. 2. Capsule neural network

   This approach outperformed CNNs quite significantly, reducing the number of errors by 45%. Capsule networks use small groups of neurons, called capsules, which are organized in layers, exploit spatial relationships, and can recognize objects in video and images. When several capsules in the same layer agree as a result of recognition, they activate a capsule in the layer above, and so on until the network assembles the whole image. Each capsule is designed to detect a specific feature of an image in such a way that it can be recognized from different angles. This approach is the newest in the field of artificial neural networks; a sketch of its two core operations is given below.
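
   For the interested reader, here is a compact Python/NumPy sketch of the squash nonlinearity and the routing-by-agreement procedure described in [9]. The capsule counts and dimensions are toy values chosen for illustration.

import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Capsule nonlinearity: preserves orientation, maps length into [0, 1).
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, n_iters=3):
    # u_hat: predictions of lower capsules for upper capsules,
    # shape (n_lower, n_upper, dim_upper).
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                  # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # couplings
        s = (c[:, :, None] * u_hat).sum(axis=0)       # weighted vote sum
        v = squash(s)                                 # upper-capsule outputs
        b += (u_hat * v[None, :, :]).sum(axis=-1)     # reward agreement
    return v

# Toy example: 6 lower capsules voting for 3 upper capsules of dimension 8.
u_hat = np.random.default_rng(0).normal(size=(6, 3, 8))
print(route(u_hat).shape)                             # (3, 8)

   Lower capsules whose predictions agree with an upper capsule's output receive larger coupling coefficients on the next iteration, which is exactly the layer-by-layer "agreement" activation described above.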


5      The State of AI Research in Russia and the USA

In order to assess the state of affairs in the field of artificial intelligence in Russia and compare it with foreign competitors, it is advisable to compare the number of scientific publications over the past year and the amount of funds allocated to the development of the field.
   Let us single out the most ambitious Western projects aimed at the development of artificial intelligence, and the teams working in this field:

1. SyNAPSE program: DARPA program for financing the development of neuromor-
   phic technologies, processors and systems that potentially scale up to a level com-
   parable to the brain size of animals, such as a mouse or a cat.
2. Stanford University: Brian A. Wandell, H.-S. Philip Wong
3. Cornell University: Rajit Manohar
4. Columbia University Medical Center: Stefano Fusi
5. University of Wisconsin Madison: Giulio Tononi
6. University of California, Merced: Christopher Kello
7. IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman,
   Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Dharmendra Modha,
   Stuart Parkin, Bipin Rajendran, Raghavendra Singh

 8. Boston University: Stephen Grossberg, Gail Carpenter, Yongqiang Cao, Praveen
    Pilly
 9. Neurosciences Institute: Gerald Edelman, Einar Gall, Jason Fleischer
10. University of Michigan: Wei Lu
11. University of California, Irvine: Jeff Krichmar
12. George Mason University: Giorgio Ascoli, Alexei Samsonovich
13. Portland State University: Christof Teuscher
14. Stanford University: Mark Schnitzer
15. Set Corporation: Chris Long

DARPA is a US Department of Defense agency responsible for developing new technologies for the military. The task of DARPA is to preserve the technological superiority of the US armed forces, to prevent technological surprise from new technical means of warfare, to support breakthrough research, and to bridge the gap between basic research and its use in the military sphere. In the first 5 years, the management of the SyNAPSE project allocated more than 106 million dollars. Large funds are also allocated annually for competitions among developers, thereby motivating the development of new technologies.
   ARPA-E is the Advanced Research Projects Agency-Energy, the US agency for advanced energy research.
   The Advanced Research Foundation is Russia's analogue of DARPA, created to facilitate research and development in the interests of Russian defense and state security that carry a high degree of risk but can achieve qualitatively new results in the military-technical, technological and socio-economic spheres.
   The Russian Science Foundation is a non-profit organization established for the pur-
pose of financial and organizational support for basic and exploratory research, the
training of scientific personnel, and the development of research teams that hold leading
positions in a specific field of science.
   The Russian Foundation for Basic Research is a self-governing state non-profit or-
ganization in the form of a federal institution administered by the Government of the
Russian Federation. The RFBR was established by Decree of the President of the Rus-
sian Federation of April 27, 1992 No. 426 “On Urgent Measures for the Preservation
of the Scientific and Technical Potential of the Russian Federation”. By order of the
Government of Russia of February 29, 2016, another state grant fund, the Russian Hu-
manitarian Fund, was attached to the RFBR.
   In general, about 23 billion rubles have been allocated in Russia over 10 years to support development in this field. For comparison, about 200 million dollars (about 12 billion rubles) are allocated annually in the United States. Obviously, the difference is enormous. Accordingly, in order to be able to compete in this area with the advanced countries, it is necessary to increase the funds invested in the development of artificial intelligence. It is advisable to introduce dedicated artificial intelligence courses in universities, to develop future specialists already at the university stage, and to conduct seminars and lectures with the participation of leading scientists and businessmen working on real tasks in the field of artificial intelligence and, in particular, artificial neural networks.


References
1. Bottou, L. et al.: Comparison of classifier methods: a case study in handwritten digit recognition. In: Proceedings of the 12th IAPR International Conference on Pattern Recognition, Conference B: Computer Vision & Image Processing, vol. 2, pp. 77-82. IEEE (1994).
2. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computation 1(4), 541-551 (1989).
3. Russakovsky, O. et al.: ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211-252 (2015).
4. Pilato, G., Yarushev, S.A., Averkin, A.N.: Prediction and Detection of User Emotions Based on Neuro-Fuzzy Neural Networks in Social Networks. In: International Conference on Intelligent Information Technologies for Industry. Springer, Cham (2018).
5. Merolla, P.A. et al.: A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345(6197), 668-673 (2014).
6. Zaghloul, K.A., Boahen, K.: A silicon retina that reproduces signals in the optic nerve. Journal of Neural Engineering 3, 257-267 (2006).
7. Wen, B., Boahen, K.: A Silicon Cochlea with Active Coupling. IEEE Transactions on Biomedical Circuits and Systems 3(6), 444-455 (2009).
8. Ranganathan, A. et al.: Self-Organization in Artificial Intelligence and the Brain. Georgia Tech online. URL: http://www.cc.gatech.edu/people/home/zkira/research.html/, last accessed 2019/04/01.
9. Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules. In: Advances in Neural Information Processing Systems, pp. 3859-3869 (2017).