Sequencing for Encoding in Neuroevolutionary Synthesis of
Neural Network Models for Medical Diagnosis
Serhii Leoshchenkoa, Andrii Oliinyka, Sergey Subbotina, Tetiana Zaikoa, Serhii Shyloa and
Viktor Lytvyna
a
    National university “Zaporizhzhia polytechnic”, Zhukovskogo street 64, Zaporizhzhia, 69063, Ukraine

                 Abstract
                 Today, artificial neural networks are actively used for various medical tasks. Diagnostics is
                 one of these tasks that can be significantly optimized by using models based on neural
                 networks. Neuroevolution methods are used to synthesize more adaptive models. Work with
                 such methods begins with the initialization of a population that contains individuals,
                 each of which is a separate neural network. For further work with them, encoding is used,
                 that is, a certain representation of information about the neural network. A well-chosen
                 encoding method can significantly simplify and speed up further work, which reduces resource
                 consumption. In this paper, the authors propose a new approach to information encoding during
                 neuroevolutionary synthesis, which will expand the practical use of classical methods.

                 Keywords 1
                 Medical diagnosis, neuromodels, sequencing, encoding, neuroevolution, ANN, genetic
                 operators

1. Introduction

    One of the most relevant areas of medical and biological research is the development and
implementation of intelligent systems for the diagnosis and prediction of modern human diseases [1-
3]. Such systems are based on various mathematical methods and algorithms.
Systems based on the mathematical apparatus of artificial neural networks (ANNs) are particularly
effective for solving problems of medical diagnostics and forecasting [2-8]. ANNs are mathematical
models, as well as their software or hardware implementations, built on the principle of organization
and functioning of biological neural networks [1-7], [9]. Each ANN consists of elements called
mathematical neurons [2-5]. A mathematical neuron receives information, assigns weight coefficients
to it, performs calculations on it, and passes it on. Connected and interacting mathematical neurons
form a neural network that can solve quite complex problems. Models based on ANNs are
successfully used for processing large amounts of data, which is typical for complex nonlinear objects
[10-13].
    Evolutionary methods are often used to synthesize such models [14]. At a high level, the idea is simple: instead of relying on a fixed neural network structure, let the structure evolve. As a rule, when a neural network is used, a structure is selected that can work on the basis of empirical data, but it is not known whether this is the best structure that could be used, and there is no way to know for sure. The idea of gradually synthesizing the best network solves this problem, because the result is a near-optimal structure with certain metaparameters (neurons, connections, and weights).
    Therefore, the synthesis starts with generating a population of random ANN topologies. However, an important issue remains the preservation and further development of such ANNs. The network

IDDM’2020: 3rd International Conference on Informatics & Data-Driven Medicine, November 19–21, 2020, Växjö, Sweden
EMAIL: sergleo.zntu@gmail.com (S. Leoshchenko); olejnikaa@gmail.com (A. Oliinyk); subbotin@zntu.edu.ua (S. Subbotin);
nika270202@gmail.com (T. Zaiko); sergey.shilo@gmail.com (S. Shylo); lytvynviktor.a@gmail.com (V. Lytvyn)
ORCID: 0000-0001-5099-5518 (S. Leoshchenko); 0000-0002-6740-6078 (A. Oliinyk); 0000-0001-5814-8268 (S. Subbotin); 0000-0003-
1800-8388 (T. Zaiko); 0000-0002-4094-6269 (S. Shylo); 0000-0003-4061-4755 (V. Lytvyn)
            ©️ 2020 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
encoding stage [14] has always been one of the central research topics in neuroevolution methods. All existing ANN encoding methods can be divided into direct encoding methods and indirect encoding methods.
    Both groups of encoding methods have their advantages and disadvantages [14]. Direct coding does not need to take into account the close relationship between gene composition and individual performance; it only needs to demonstrate that its coding design can contribute to effective network evolution. Indirect coding, in contrast, requires developing a set of encoding and decoding rules for transforming between gene sequences and individual phenotypes, so a better understanding of the genetic and evolutionary mechanisms of biology is required.
    In biology, the way the genes of individuals are encoded is extremely efficient: a very short sequence of genes can control complex phenotypes of individuals. Many modern studies of indirect coding are still at an early research stage, but with the continuous development of research on the mechanisms of biological evolution, neuroevolutionary methods based on both coding directions still have great potential.
    Moreover, a number of difficulties with choosing the encoding type are associated with modern ANN topologies. Classical direct coding methods are almost impossible to apply to recurrent ANNs (RNNs) [15], where, in addition to feedforward connections between neurons, there are also feedback connections. When encoding deep ANNs (DNNs), problems arise in both paradigms, because it is necessary to encode not just the hidden neurons but also to preserve the structure, i.e. the distribution of neurons by layers.
    That is why the task of developing new encoding methods that simplify working with modern ANN topologies and are possibly less resource-intensive remains urgent.

2. Literature Review

    In classical genetics, a distinction is usually made between genotype and phenotype [16-22]. A genotype is the genetic representation of a creature, and a phenotype is its actual physical representation. Evolutionary algorithms strongly reflect biology, and neuroevolution is no different in this regard.
    The encoding question arises from how the ANN should be represented genetically during operation. The way the ANN is encoded determines how the method will handle the key evolutionary processes: selection, mutation, and crossover (or recombination) [16], [17]. Any encoding falls into one of two categories, direct or indirect.
    Direct encoding operates on chromosomes that give a linear representation of the ANN, in which all the neurons, weights, and connections of the ANN are explicitly specified. Thus, it is always possible to construct a one-to-one correspondence between the structural elements of the ANN (neurons, connections, weights, etc.), i.e. the phenotype, and the corresponding sections of the chromosome, i.e. the genotype [18].
    This way of representing the neural network is the simplest and most intuitive, and it also makes it possible to apply the existing genetic search apparatus to the resulting chromosomes (for example, crossover and mutation operators) [19-23]. One of the most obvious disadvantages of this encoding scheme is the swelling of the genotype as the number of neurons and connections of the ANN grows and, as a result, low efficiency due to a significant increase in the search space [22].

Figure 1: An example of direct encoding of information about an ANN (each gene consists of an Index Bit and Weight Encoding Bits; the illustrated connection weights are 75, -25, -1, -100, and 40)
    In the example in Fig. 1, the Index Bit indicates whether a connection exists, and the Weight Encoding Bits encode the weight value in binary form. A number of researchers have proposed an encoding strategy that encodes weights as real numbers, as well as certain mutation operators suitable for this encoding [24].
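
    To make the direct scheme of Fig. 1 concrete, a minimal Python sketch is given below. It assumes a 9-bit gene consisting of an index bit followed by a sign bit and seven magnitude bits; this bit layout, the function names, and the example weights are illustrative assumptions rather than the exact scheme of [24].

# Sketch of direct connection encoding in the spirit of Fig. 1
# (assumed layout: 1 index bit + 1 sign bit + 7 magnitude bits).

def encode_connection(exists: bool, weight: int) -> str:
    """Encode one connection as an index bit plus 8 weight-encoding bits."""
    sign = '1' if weight < 0 else '0'
    magnitude = format(abs(weight) & 0x7F, '07b')   # 7 magnitude bits
    return ('1' if exists else '0') + sign + magnitude

def decode_connection(gene: str) -> tuple:
    """Recover (exists, weight) from a 9-bit gene string."""
    exists = gene[0] == '1'
    weight = int(gene[2:], 2) * (-1 if gene[1] == '1' else 1)
    return exists, weight

# The chromosome of a small ANN is the concatenation of such genes:
chromosome = ''.join(encode_connection(True, w) for w in [75, -25, -100, -1, 40])

    With such a representation, standard bit-flip mutation and binary crossover can be applied directly to the chromosome string.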
    Indirect coding uses a more biological principle: the genotype does not encode the phenotype itself, but the rules of its construction (relatively speaking, a certain program) [16-18]. When decoding a genotype, these rules are applied in a certain sequence (most often recursively, and usually the applicability of a rule depends on the current context), and as a result a neural network is built.
    When indirect coding methods are used, the genetic representation (and, consequently, the search space for genetic algorithms) is more compact, and the genotype itself allows encoding modular structures, which gives advantages in the adaptability of the obtained results under certain conditions [18], [22-24]. In return, however, it becomes practically impossible to track which changes in the genotype led to particular changes in the phenotype, and there are many difficulties with the selection of genetic operators, convergence, and productivity.
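
    The indirect principle can be illustrated with a small rewriting-rule sketch in Python, loosely in the spirit of Kitano-style matrix grammars. The symbols, rules, and terminal blocks below are invented purely for illustration and do not reproduce any cited method: the compact genotype is the rule set, and the phenotype (a connection matrix) is obtained only by recursively applying the rules.

import numpy as np

# Illustrative matrix-grammar genotype: non-terminals expand into 2x2 blocks
# of symbols, terminals expand into fixed 2x2 binary blocks. After full
# expansion the start symbol 'S' yields the adjacency matrix of the network.
RULES = {
    'S': [['A', 'B'], ['C', 'D']],
    'A': [['a', 'b'], ['c', 'd']],
    'B': [['a', 'a'], ['a', 'a']],
    'C': [['d', 'd'], ['d', 'd']],
    'D': [['b', 'c'], ['b', 'c']],
}
TERMINALS = {
    'a': np.zeros((2, 2), dtype=int),
    'b': np.array([[0, 1], [0, 0]]),
    'c': np.array([[1, 0], [1, 0]]),
    'd': np.array([[1, 1], [0, 1]]),
}

def expand(symbol):
    """Recursively rewrite a symbol until only 0/1 connection entries remain."""
    if symbol in TERMINALS:
        return TERMINALS[symbol]
    rows = [np.hstack([expand(s) for s in row]) for row in RULES[symbol]]
    return np.vstack(rows)

adjacency = expand('S')   # an 8x8 connection matrix decoded from a tiny genotype

    In such a scheme, a mutation of a rule near the start symbol reshapes a large part of the resulting network, which illustrates the analysis difficulty discussed below.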
    Historically, direct coding was investigated earlier and more deeply, but a number of disadvantages of this approach are forcing researchers to look more closely at indirect coding methods. However, indirect methods are inherently difficult to analyze [18]. For example, the same mutation applied to a rule located at the beginning of the program has a huge effect, while applied to the final rules it may have almost no effect at all; as a result, the genetic search has a strong tendency toward premature convergence. The selection of crossover operators is also a non-trivial task, since the use of standard binary operators, as a rule, leads to the frequent appearance of non-viable solutions.
    There are also a number of other neuroevolution encoding approaches. The most popular ones, with a brief description of each, are [24]:
        the Boers and Kuiper approach uses context-sensitive L-systems;
        the Dellaert and Beer approach is similar to that of Cangelosi and Elman, but uses random Boolean networks;
        the Harp, Samad and Guha approach uses zone-based direct encoding of the structure;
        the Gruau approach uses a grammar tree to set instructions for cell division (similar to the approach of Cangelosi, Parisi, and Nolfi);
        in the Vaario approach, cell growth is governed by L-systems.
    As a result, indirect encoding is usually more compact. On the other hand, setting rules for indirect
encoding can lead to a strong bias in the search space, so it is much more difficult to create indirect
encoding without significant knowledge of how the encoding will be used.

3. Materials and methods
   It is known from the theory of genetics that sequencing of biopolymers (proteins and nucleic acids, i.e. DNA and RNA) is the determination of their amino acid or nucleotide sequence [25], [26]. As a result of sequencing, a formal description of the primary structure of a linear macromolecule is obtained in the form of a sequence of monomers in text form. The size of sequenced sections of DNA usually does not exceed 100 base pairs for next-generation sequencing and 1000 base pairs for Sanger sequencing. By sequencing overlapping sections of DNA, sequences of sections of genes, whole genes, total mRNA, or complete genomes of organisms are obtained [25], [26].
Figure 2: The process of DNA translation

   The proposed method of encoding information about the ANN is organized on a similar principle [25-28]. It begins with the encoding of connections: the genotype of an individual stores information about the weights of the interneuronal connections of the neuromodel, and each gene contains the indices of the initial and final neurons of a connection as well as the weight of that connection. When the method works with an RNN, an additional cell with the feedback weight is added; its index is determined by the index of the output neuron of the connection.
   Some rules for indexing neurons must be introduced [29-33]:
   1) since the number of inputs and outputs of the network is a fixed value, the indices of the corresponding neurons are constant and take values in the interval [0; Ni - 1] for input neurons and [Ni; Ni + No - 1] for output neurons, where Ni and No are the number of inputs and outputs of the network, respectively. Deleting input and output neurons is impossible;
    2) new neurons that appear as a result of mutations get the minimum possible index. For example, if an individual represents a network with three inputs and two outputs and does not contain hidden neurons, then the new neuron in this network will be assigned the index "5", the next one "6", and so on;
    3) the indices of neurons in a network represented by an individual cannot contain gaps, that is, there can be no ANN whose neurons have, for example, the indices N0, N1, N2, N5, N6. If such a case occurs, for example, after removing the neuron with index 4 from the network, the indices of the remaining neurons are adjusted as follows: N5 -> N4, N6 -> N5, while the data in the connections related to these neurons are changed accordingly.
    The result is a two-row list in which every four cells (two in each row) form a gene and store the information about one connection. Fig. 3 shows an example.
Figure 3: Example of ANN encoding: the genotype is a two-row list of genes gen0–gen6, where each gene stores the index of the 1st neuron of the connection, the index of the 2nd neuron of the connection, the weight of the connection, and the weight of the feedback connection (for the 1st neuron); the right part of the figure shows the corresponding network with neurons indexed 0–5
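
    A possible in-memory representation of this encoding is sketched below in Python. The Gene structure follows the four cells described above (indices of the two connected neurons, connection weight, feedback weight); the example genotype and the helper that re-indexes neurons after a deletion (rule 3) use illustrative values and names, not data taken from the experiments.

from dataclasses import dataclass

@dataclass
class Gene:
    """One connection of the neuromodel: the four cells of a gene in Fig. 3."""
    src: int                # index of the 1st neuron of the connection
    dst: int                # index of the 2nd neuron of the connection
    weight: float           # weight of the connection
    feedback: float = 0.0   # weight of the feedback connection (for the 1st neuron)

# Illustrative genotype: three inputs (0-2), two outputs (3-4), one hidden neuron (5).
genotype = [
    Gene(0, 3, 0.7), Gene(1, 3, -1.2), Gene(2, 3, -0.1),
    Gene(0, 4, 0.6), Gene(1, 4, -0.9), Gene(2, 4, 1.3),
    Gene(3, 5, 1.5, feedback=0.4),
]

def remove_neuron(genes, idx):
    """Rule 3: drop all connections of a hidden neuron and close the index gap."""
    kept = [g for g in genes if idx not in (g.src, g.dst)]
    shift = lambda i: i - 1 if i > idx else i
    return [Gene(shift(g.src), shift(g.dst), g.weight, g.feedback) for g in kept]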

   In addition, it should be noted that the second rule provides a certain ordering of the layers: a sequencing of their order in the list. In the example in Fig. 4, a polymer can be regarded as a node or component of the ANN, but in the context of sequencing it is called a polymer.
Figure 4: Sequencing of the ANN's layers: each polymer (gene) stores the 1st neuron of the connection, the 2nd neuron of the connection, the weight of the connection, and the weight of the feedback connection (for the 1st neuron); the polymers are ordered in the list according to the input, hidden, and output layers

4. Experimental research and analysis of results
    It should be noted that the most similar method of encoding information in neuroevolutionary synthesis is Neuroevolution of Augmenting Topologies (NEAT) [20]. Therefore, in the experimental studies below, the proposed encoding method is compared with NEAT. Apart from the modified encoding of information about an individual, the usual stages of the genetic algorithm for ANN synthesis are used: mutation, rank selection, and one-point crossover. The following types of mutation are used [20-24], [29-33]:
    1. adding a hidden neuron;
    2. deleting a randomly selected hidden neuron;
    3. adding a connection;
    4. deleting a randomly selected connection;
    5. changing the weight of a randomly selected connection by a random value from the range [-0.5; 0.5].
    The type of mutation is chosen based on the values of criteria that characterize the complexity of the problem and of the ANN [34-37].
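
    A sketch of how three of these mutation operators might act on the gene list from Section 3 is given below in Python; the genotype is represented as a list of [src, dst, weight, feedback] genes matching the four cells of a gene, while the candidate selection and the weight range for newly added connections are illustrative assumptions.

import random

# Genes are represented here as mutable lists [src, dst, weight, feedback].

def mutate_weight(genotype):
    """Mutation type 5: shift a random connection weight by a value in [-0.5, 0.5]."""
    if genotype:
        gene = random.choice(genotype)
        gene[2] += random.uniform(-0.5, 0.5)

def add_connection(genotype, n_neurons):
    """Mutation type 3: connect two neurons that are not connected yet."""
    existing = {(g[0], g[1]) for g in genotype}
    candidates = [(i, j) for i in range(n_neurons) for j in range(n_neurons)
                  if i != j and (i, j) not in existing]
    if candidates:
        src, dst = random.choice(candidates)
        genotype.append([src, dst, random.uniform(-1.0, 1.0), 0.0])

def delete_connection(genotype):
    """Mutation type 4: remove a randomly selected connection gene."""
    if genotype:
        genotype.pop(random.randrange(len(genotype)))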
    A data set was selected for testing based on the characteristics of patients with pneumonia, which was recently presented by M.-A. Kim, J. S. Park, C. W. Lee, and W.-I. Choi [38]. The total sample size is 77490 values. Table 1 shows the characteristics of the data set.

Table 1
General characteristics of the data set
   Total number of values            77490                    Number of attributes             54
   The type of the data              Numeric                  Number of instances              1435
   For this task, the development of neuromodels will make it much easier to determine the further diagnosis of a person after collecting data on their well-being. Given that pneumonia is one of the most important signs and complications of COVID-19 [39], [40], after additional training on extended data this model can be used to diagnose patients or to predict the further development of disease dynamics.
   Table 2 compares the results of model synthesis using two methods.

Table 2
Test results
          Method                     Evaluations                Iterations              Population Size
           NEAT                        14265                        97                       300
           MGA                         13949                        74                       300
    The results show that, with the same population size, the proposed method (MGA) required fewer iterations and fitness evaluations. This may indicate that the proposed encoding method requires fewer calculations when genetic operators are applied at the mutation and crossover stages.
    In NEAT, connections and weights are encoded directly, and crossover between individuals is realized through unique connection markers. The network structure of a population can be effectively developed in an iterative process. Although NEAT is a highly effective neuroevolutionary method, it must encode a large amount of specific information to ensure that information is passed between generations. The encoding method developed in this paper is based on polymer sequencing and uses a more compact encoding and decoding strategy. It can effectively evolve the structure and weights of the individuals in a population while encoding only a small amount of information.
    In the proposed method, different individuals can evolve into different structures, and each structure represents a search space of different dimensionality. In addition, the simple genetic operators force the method to try to solve the problem in various ways. The space of modifications (mutations) is very large, which is beneficial for finding the optimal solution and further reflects the advantages of the approach.

5. Further work

    Further involvement and use of probabilistic data structures is quite interesting. Work [41] suggests using a modification of Scalable Bloom filters. However, this approach does not allow encoding feedback connections, and applying it during mutations that remove certain neurons requires the introduction of an additional structure, a second Bloom filter for counting in reverse order. Therefore, the number of calculations increases. Given this, it is more interesting to use the Count-Min sketch structure [42]. The CM sketch is a probabilistic data structure that serves as a frequency table of events in a data stream. It uses hash functions to map events to counters; unlike a hash table, it uses only sublinear space, at the expense of over-counting some events due to collisions.
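
    A compact Python sketch of such a Count-Min structure is shown below; the width, depth, and MD5-based hashing are illustrative choices rather than parameters recommended in [42].

import hashlib

class CountMinSketch:
    """Count-Min sketch: depth rows of width counters, one hash per row.
    Estimates are never below the true count; collisions can only over-count,
    which is the source of the false positives discussed below."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        digest = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        return min(self.table[row][self._index(row, key)] for row in range(self.depth))

cms = CountMinSketch()
cms.add("connection_3_5")               # e.g. count occurrences of an encoded connection
print(cms.estimate("connection_3_5"))   # upper bound on the true count (here 1)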
    The use of probabilistic data structures makes it possible to encode information much more compactly. However, when working with such structures, their probabilistic nature should not be forgotten. This means that roughly in one case out of a hundred a false positive is possible (an element is not in the set, but the data structure reports that it is). When it comes to tasks related to human safety, even this error probability is not acceptable. This is why probabilistic data structures can help encode the ANN more compactly during synthesis, but they are not acceptable in models used for medical diagnostics.

6. Conclusions
   During the work, a new approach to the encoding stage in the neuroevolutionary synthesis of ANNs was proposed. The new approach can be used without significant modifications in the synthesis of modern ANN topologies: RNNs and DNNs. An experimental study confirmed the effectiveness of the method in comparison with its closest competitor, the NEAT method. The better results can be explained by a more compact scheme for encoding information about the ANN, which simplifies the subsequent application of genetic operators.
   In the course of the work, an idea for further development of the proposed approach also appeared. However, this technology is based on probabilistic data structures, which, because of their probabilistic nature, cannot be used when working with medical data that require high accuracy.

7. Acknowledgements
   The work is supported by the state budget scientific research project of National University
"Zaporizhzhia Polytechnic" “Intelligent methods and software for diagnostics and non-destructive
quality control of military and civilian applications” (state registration number 0119U100360).

8. References

[1] J.H. Kamdar, J. Jeba Praba, J.J. Georrge, Artificial Intelligence in Medical Diagnosis: Methods,
     Algorithms and Applications. In: Jain V., Chatterjee J. (eds) Machine Learning with Health Care
     Perspective. Learning and Analytics in Intelligent Systems, vol 13. Springer, Cham, 2020. doi:
     10.1007/978-3-030-40850-3_2
[2] M. Brunn, A. Diefenbacher, P. Courtet, et al., The Future is Knocking: How Artificial
     Intelligence Will Fundamentally Change Psychiatry, Acad Psychiatry 44 (2020) 461–466. doi:
     10.1007/s40596-020-01243-8
[3] L. Pantanowitz, G. M. Quiroga-Garza, L. Bien et al., An artificial intelligence algorithm for
     prostate cancer diagnosis in whole slide images of core needle biopsies: a blinded clinical
     validation and deployment study, The Lancet Digital Health, vol. 2(8) (2020) 407-416. doi:
     10.1016/S2589-7500(20)30159-X
[4] Artificial intelligence identifies prostate cancer with near-perfect accuracy, University of
     Pittsburgh, 2020. URL: https://www.eurekalert.org/pub_releases/2020-07/uop-aii072420.php
[5] E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,
     Basic Books, New York, 2019.
[6] L. Lu, X. Wang, G. Carneiro, L. Yang, Deep Learning and Convolutional Neural Networks for
     Medical Imaging and Clinical Informatics (Advances in Computer Vision and Pattern
     Recognition), Springer, Berlin, 2019.
[7] L. Morra, S. Delsanto, L. Correale, Artificial Intelligence in Medical Imaging: From Theory to
     Clinical Practice, CRC Press, Florida, 2019.
[8] J.A.J. Alsayaydeh, W.A.Y. Khang, A.K.M.Z Hossain, V. Shkarupylo, J. Pusppanathan, The
     experimental studies of the automatic control methods of magnetic separators performance by
     magnetic product, ARPN Journal of Engineering and Applied Sciences, vol. 15(7) (2020) 922-
     927.
[9] J.A.J. Alsayaydeh, W.A. Indra, W.A.Y. Khang, V. Shkarupylo, D.A.P.P. Jkatisan, Development
     of vehicle ignition using fingerprint, ARPN Journal of Engineering and Applied Sciences, vol.
     14(23) (2019) 4045-4053.
[10] J.A.J. Alsayaydeh, W.A.Y. Khang, W.A. Indra, J.B. Pusppanathan, V. Shkarupylo, A.K.M. Zakir
     Hossain, S. Saravanan, Development of vehicle door security using smart tag and fingerprint
     system, ARPN Journal of Engineering and Applied Sciences, vol. 9(1) (2019) 3108-3114.
[11] J.A.J. Alsayaydeh, W.A.Y. Khang, W.A. Indra, V. Shkarupylo, J. Jayasundar, Development of
     smart dustbin by using apps, ARPN Journal of Engineering and Applied Sciences, vol. 14(21)
     (2019) 3703-3711.
[12] J.A. Alsayaydeh, V. Shkarupylo, M. S. Hamid, S. Skrupsky, A. Oliinyk, Stratified Model of the
     Internet of Things Infrastructure, Journal of Engineering and Applied Sciences, vol. 13(20)
     (2018) 8634-8638.
[13] J.A. Alsayaydeh, M. Nj, S.N. Syed, A.W. Yoon, W.A. Indra, V. Shkarupylo, C. Pellipus, Homes
     appliances control using bluetooth, ARPN Journal of Engineering and Applied Sciences, vol.
     14(19) (2019) 3344-3357.
[14] I. Omelianenko, Hands-On Neuroevolution with Python: Build high-performing artificial neural
     network architectures using neuroevolution-based algorithms, Packt Publishing, Birmingham,
     2019.
[15] S. Leoshchenko, A. Oliinyk, S. Subbotin, T. Zaiko, Using Recurrent Neural Networks for Data-
     Centric Business. In: Ageyev D., Radivilova T., Kryvinska N. (eds) Data-Centric Business and
     Applications. Lecture Notes on Data Engineering and Communications Technologies, vol 42
     (2020), 73-91. doi: 10.1007/978-3-030-35649-1_4
[16] A. Bergel, Agile Artificial Intelligence in Pharo: Implementing Neural Networks, Genetic
     Algorithms, and Neuroevolution, Apress, New York, 2020.
[17] G. Blokdyk, Neuroevolution of augmenting topologies: Second Edition, 5STARCooks, Fort
     Wayne, 2018.
[18] G. Blokdyk, Neuroevolution of augmenting topologies: A Complete Guide, CreateSpace
     Independent Publishing Platform, California, 2017.
[19] I. Izonin, R. Tkachenko, N. Kryvinska, P. Tkachenko, M. ml. Greguš, Multiple Linear Regression
     Based on Coefficients Identification Using Non-iterative SGTM Neural-like Structure. In: Rojas
     I., Joya G., Catala A. (eds) Advances in Computational Intelligence. IWANN 2019. Lecture
     Notes in Computer Science, vol 11506 (2019) 467-479. doi: 10.1007/978-3-030-20521-8_39
[20] I. Izonin, N. Kryvinska, R. Tkachenko, K. Zub, An approach towards missing data recovery
     within IoT smart system. 16th International Conference on Mobile Systems and Pervasive
     Computing, MobiSPC 2019, 14th International Conference on Future Networks and
     Communications, FNC 2019, 9th International Conference on Sustainable Energy Information
     Technology, SEIT 2019, Elsevier B.V., 2019, p. 11-18. doi: 10.1016/j.procs.2019.08.006
[21] I. Izonin, M. ml. Greguš, R. Tkachenko, M. Logoyda, O. Mishchuk, Y. Kynash, SGD-Based
     Wiener Polynomial Approximation for Missing Data Recovery in Air Pollution Monitoring
     Dataset, 15th International Work-Conference on Artificial Neural Networks, IWANN 2019,
     Springer Verlag, 2019, p. 781-793. doi: 10.1007/978-3-030-20521-8_64
[22] J.H. Holland, Signals and Boundaries: Building Blocks for Complex Adaptive Systems, MIT
     Press, Cambridge, 2014.
[23] G.M. Khan, Evolution of Artificial Neural Development: In search of learning genes, Springer,
     Berlin, 2018.
[24] A. Brabazon, M. O'Neill, S. McGarraghy, Natural Computing Algorithms (Natural Computing
     Series), Springer, Berlin, 2015.
[25] B.A. Pierce, Genetics: A Conceptual Approach, W.H. Freeman & Co. Ltd., New York, 2019.
[26] R. Brooker, Genetics: Analysis and Principles, McGraw-Hill Education, New York, 2020.
[27] I. Fedorchenko, A. Oliinyk, A. Stepanenko, T. Zaiko, S. Shylo, A. Svyrydenko, Development of
     the modified methods to train a neural network to solve the task on recognition of road users,
     Eastern-European Journal of Enterprise Technologies,vol. 2(9-98) (2019) 46-55. doi:
     10.15587/1729-4061.2019.164789
[28] A.O. Oliinyk, T.A. Zaiko, S.A. Subbotin, Factor analysis of transaction data bases, Automatic
     Control and Computer Sciences, vol. 48 (2014) 87-96. doi: 10.3103/S0146411614020060
[29] X.-S. Yang, X.-S. He, Nature-Inspired Computation in Data Mining and Machine Learning
     (Studies in Computational Intelligence), Springer, Berlin, 2019.
[30] A.A. Freitas, Data Mining and Knowledge Discovery with Evolutionary Algorithms, Springer,
     Berlin, 2002.
[31] E. Wertheim, Machine Learning: Evolutionary Algorithms, CreateSpace Independent Publishing
     Platform, California, 2016.
[32] X. Yang, S. Deng, M. Ji, J. Zhao, W. Zheng, Neural Network Evolving Algorithm Based on the
     Triplet Codon Encoding Method, Genes (Basel), vol. 9(12) (2018). URL:
     https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6315701/
[33] S. Risi, K.O. Stanley, Indirectly Encoding Neural Plasticity as a Pattern of Local Rules. In:
     Doncieux S., Girard B., Guillot A., Hallam J., Meyer JA., Mouret JB. (eds) From Animals to
     Animats 11. SAB 2010. Lecture Notes in Computer Science, vol 6226. Springer, Berlin,
     Heidelberg. doi: 10.1007/978-3-642-15193-4_50
[34] S. Leoshchenko, A. Oliinyk, S. Subbotin, N. Gorobii, T. Zaiko, Synthesis of artificial neural
     networks using a modified genetic algorithm, 1st International Workshop on Informatics & Data-
     Driven Medicine, IDDM 2018, CEUR-WS, 2018, pp. 1-13. DBLP Key:
     conf/iddm/PerovaBSKR18
[35] S. Leoshchenko, A. Oliinyk, S. Subbotin, S. Shylo, V. Shkarupylo, Method of Artificial Neural
     Network Synthesis for Using in Integrated CAD 15th International Conference on the Experience
     of Designing and Application of CAD Systems, CADSM’19, IEEE, 2019, pp. 1-6. doi:
     10.1109/CADSM.2019.8779248
[36] S. Leoshchenko, A. Oliinyk, S. Skrupsky, S. Subbotin, N. Gorobii, V. Shkarupylo, Modification
     of the Genetic Method for Neuroevolution Synthesis of Neural Network Models for Medical
     Diagnosis, Second International Workshop on Computer Modeling and Intelligent Systems,
     CMIS-2019, CEUR-WS, 2019, pp. 143-158. DBLP Key: conf/cmis/LeoshchenkoOSGS19
[37] S. Leoshchenko, A. Oliinyk, S. Subbotin, T. Zaiko, N. Gorobii, Implementation of selective
     pressure mechanism to optimize memory consumption in the synthesis of neuromodels for
     medical diagnostics, 2nd International Workshop on Informatics & Data-Driven Medicine,
     IDDM 2019, CEUR-WS, 2019, pp. 109-120. DBLP Key: conf/iddm/LeoshchenkoOSZG19
[38] M.-A. Kim, J. S. Park, C. W. Lee, W.-I. Choi, Pneumonia severity index in viral community
     acquired pneumonia in adults, PLoS One, vol. 14(3) (2019). doi: 10.1371/journal.pone.0210102
[39] G. Raghu, K. C Wilson, COVID-19 interstitial pneumonia: monitoring the clinical course in
     survivors, The Lancet Respiratory Medicine, vol. 8(9) (2020) 839-842. doi: 10.1016/S2213-
     2600(20)30349-0
[40] C. Hani, N.H. Trieu, I. Saab, S. Dangeard, S. Bennani, G.Chassagnon, M.-P. Revel, COVID-19
     pneumonia: A review of typical CT findings and differential diagnosis, Diagnostic and
     Interventional Imaging, vol. 101(5) (2020), 263-268. doi: 10.1016/j.diii.2020.03.014
[41] B. Reagen, U. Gupta, R. Adolf, M. Mitzenmacher, A. Rush, G.-Y. Wei, D. Brooks, Weightless:
     Lossy Weight Encoding For Deep Neural Network Compression, Computer Science,
     Mathematics, 2017. URL: http://proceedings.mlr.press/v80/reagan18a/reagan18a.pdf
[42] G. Cormode, Count-Min Sketch, Encyclopedia of Database Systems, Springer, 2009, 511-516.