<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The hybrid classifier for the task of career guidance testing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>I.S. Tarasova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>V.V. Andreev</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>R.М. Ainbinder</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D.V. Toskin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Nizhny Novgorod State University of Architecture and Civil Engineering</institution>
          ,
          <addr-line>Nizhny Novgorod</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Nizhny Novgorod state technical university n. a. R. E. Alekseev</institution>
          ,
          <addr-line>Nizhniy Novgorod</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Career guidance testing assumes the presence of several types of individuals and, when the output of testing consists of images that characterize certain qualities of the subjects, several corresponding types (classes) of images. Each image class consists of selected searchable elements that determine whether an image belongs to a particular type (class). The ColourUnique M software module automates the process of testing and of saving the test forms. The functions of the classifier, however, are still performed by an expert (a teacher or psychologist), which introduces errors in evaluating the result due to individual characteristics of human perception and can negatively affect the reliability of the classification. The paper considers two algorithms for evaluating the images (the completed test forms): one is a neural network, the other a filtering algorithm with rigidly defined areas for detecting the desired elements. A number of problems arose during the implementation of these algorithms. The classifier is created in order to improve the accuracy of classification, both in comparison with expert assessment and with the first experimental data obtained. To achieve the most reliable classification results, the authors consider the possibility of implementing a hybrid classifier for career guidance tasks.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Today there are many different methods for recognizing desired objects in an image. The choice is determined by the features of the object and by the goals that the developer sets for the recognition process. The properties of the desired object are often specified without strict mathematical parameters. In this case one needs to formulate the properties of the desired object (or objects) and develop a stable method for its (their) detection. To solve this problem, empirical observations must be found, generalized and formulated in mathematical terms; in other words, the parameters of the desired object must be formalized [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>When entire classes of objects are searched for, whose parameter sets differ or in which not every parameter occurs in every class, formalization leads to a decrease in the accuracy of the estimate. In an area of activity such as career guidance, an error in determining the object class is critical.</p>
      <p>This paper offers a solution to the problem of inaccuracy in determining one of the classes during the implementation of a neural network algorithm. The authors develop an alternative filtering algorithm with the possibility of implementing a hybrid classifier. The goal is to consider both algorithms and compare their effectiveness. Based on the data obtained, a combined method for evaluating and classifying images can be created that contains both a neural network algorithm and a filtering algorithm.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The types of individuals and classes of the objects</title>
      <p>
        The classification of individuals proposed for automation is similar to the Holland classification [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In the course of testing several groups of subjects, a mutual complementarity of the final results was revealed, which facilitates interpretation and explains a number of points that cause contradictions in interpretation. For example, the presence of dominant artistic and realistic trends in colour corresponds to the profile of the «rational» type according to the «Associative color space»© testing method. According to this method there are only six desired types: the A type («skeptical»), B type («moderately avant-garde»), C type («skeptical-creative»), D type («rational»), E type («creative») and F type («radically avant-garde»). The testing is performed by the ColourUnique M program, which is the first module of the ColourUnique Pro career guidance software package [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Since the F type is the only one that uses the «cut» tool when working with the test form (Fig. 1), it is determined already in the process of working with the test form and is not processed by the neural network algorithm or the filtering algorithm. This leaves 5 types of individuals to classify.</p>
      <p>
        The remaining A, B, C, D and E types differ mainly in the distribution of dark and light pixels within the planigon [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The difficulty of determining the evaluation criteria lies in the peculiarities of the colour distribution.
      </p>
      <p>At the moment, the objects used for research and for testing the algorithms are scans of the completed test forms, which avoid distortion of the cells (Fig. 2). To the user the test form appears as a quasi-space (an effect of perceptual distortion), but in reality it is a matrix.</p>
      <p>The A type is characterized by a predominant (50% or more; 100% corresponds to a pure type) presence of «horizontal line» elements (Fig. 3). Lines (from 3 cells in length) can consist either of colours with exactly the same coordinates, or of colours with different coordinates that belong to the same tone, for example «green», as in Fig. 3c.</p>
      <p>Another condition for the presence of the A type is a small number of shades, namely no more than one tone with 5 or more shades.</p>
      <p>There are the following signs of a dominant A type:
1. 50% or more of the scan is filled with the desired «horizontal line» elements (from three cells of the same colour or tone);</p>
      <p>2. A small number of shades, namely no more than one tone with 5 or more shades;
3. The fewer shades applied by the recipient, the more pronounced the type is considered.</p>
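      <p>As an illustrative sketch (not the ColourUnique M implementation), sign 1 could be checked roughly as follows, assuming the scan has already been reduced to a grid of tone labels; the grid representation and function names are hypothetical:</p>
      <preformat><![CDATA[
```python
def horizontal_line_coverage(grid):
    """Fraction of cells covered by horizontal runs of 3+ equal tones.

    `grid` is a list of rows; each cell is a tone label (a simplified
    stand-in for the colour coordinates stored in a real test form).
    """
    covered = 0
    total = sum(len(row) for row in grid)
    for row in grid:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1
            if j - i >= 3:      # a «horizontal line» element
                covered += j - i
            i = j
    return covered / total

def looks_like_type_a(grid):
    # Sign 1: 50% or more of the scan is filled with line elements.
    return horizontal_line_coverage(grid) >= 0.5
```
]]></preformat>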
      <p>The B type is characterized by the presence (3 or more elements per scan) of «vertical line» elements (from three cells in height) with identical coordinates (Fig. 4c, 4d). The desired elements are at least 3 cells high and can reach 6, so the definition area can reach 6×1 (Fig. 4e).</p>
      <p>The total number of shades can vary widely; the main thing is the presence, 4 times or more, of an element at least 3 cells in height of exactly the same colour (Fig. 4e).</p>
      <p>There is the following sign of a dominant B type:
1. The presence of 3 or more «vertical line» elements.</p>
      <p>The C type is characterized by the presence of «horizontal line» elements (Fig. 5c) and of «wide» or «narrow» gradients in the definition areas 3×3 and 2×3 (for wide) and 3×1, 4×1, 5×1, 6×1 (for narrow) (Fig. 5d). The line elements (from 3 cells in length) can consist of colours with exactly the same coordinates, or of colours with different coordinates that belong to the same tone (Fig. 4d, 4e, 5d, 5e).</p>
      <p>A tone is a range of colours designated in the colour circle as «yellow», «red», «orange» and so on. It is the tone that gives a colour its name.</p>
      <p>In the current selection, the A type elements are mostly represented by colours of the same coordinates. It is assumed that bands of colours with different coordinates but a single tone will be fewer.</p>
      <p>«Wide gradients» are bands of colours with exactly the same coordinates across their width and colours of the same tone range along their length, for example the green and orange gradients in the 3×3 and 2×3 definition areas in Fig. 5d.</p>
      <p>The total number of shades may vary, but there are generally 4-5 shades for 2-3 tones. For example, the scan in Fig. 5d shows about 3 shades each for the red, green, purple and orange tones.
There are the following signs of a dominant C type:
1. The simultaneous presence of «wide» or «narrow» gradient elements and «line» elements;</p>
      <p>2. A minimum number of elements for determining the type: 2 lines and one wide gradient (3×3 or 2×3), or 2 lines and two narrow gradients;
3. A relatively high variety and number of shades.</p>
      <p>The D type is characterized by the presence of «chess» or «chess-like» elements (Fig. 6c, Fig. 7). Chess and chess-like elements are formed by a special arrangement of dark and light cells in the 2×2 definition area. Moreover, chess elements can contain any colour.</p>
      <p>The total number of shades is usually small.</p>
      <p>There are the following signs of a dominant D type:
1. The presence of «chess» elements on 50% or more of the scan area;</p>
      <p>2. A large number of cells with repeated colours of the same coordinates, especially black, white, yellow, red and blue;</p>
      <p>3. The use of a relatively larger number of colours from the «basic» section of the ColourUnique M software module (Fig. 8) compared to other types.</p>
      <p>The E type is mainly characterized by a large number of shades of the selected colours, namely 5 or more shades for at least two tones (Fig. 9). As a rule, there are considerably more. Some representatives of this type are distinguished mainly by «narrow» gradients, more rarely by «wide» ones (Fig. 5e, Fig. 9c).</p>
      <p>There are the following signs of a dominant E type:
1. The presence of a large number of shades, namely 5 or more shades for at least two tones;</p>
      <p>2. The predominant presence of «narrow», sometimes «wide», gradients.</p>
    </sec>
    <sec id="sec-2a">
      <title>3. The neural network algorithm</title>
      <p>The data set is cut: 15% goes for validation and 15% for testing. There are 3 folders, with data for training, validation and testing; in each folder the images are divided into classes (types).</p>
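      <p>The described split could be sketched as follows (a simplified stand-in for the actual on-disk folder layout; the function name is hypothetical, the 15%/15% fractions are those stated above):</p>
      <preformat><![CDATA[
```python
import random

def split_dataset(images, val_frac=0.15, test_frac=0.15, seed=0):
    """Split labelled images into training/validation/testing folders.

    `images` is a list of (filename, class_label) pairs.
    """
    items = list(images)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_frac)
    n_test = int(len(items) * test_frac)
    parts = {
        "validation": items[:n_val],
        "testing": items[n_val:n_val + n_test],
        "training": items[n_val + n_test:],
    }
    # Within each folder, group the image names by class (type).
    split = {}
    for folder, pairs in parts.items():
        by_class = {}
        for name, label in pairs:
            by_class.setdefault(label, []).append(name)
        split[folder] = by_class
    return split
```
]]></preformat>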
      <p>
        Previously untrained neural networks were used to classify individual types using the «Associative color space»© testing method. Three architectures were used: MobileNet [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], Inception_v3 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and U-Net [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Training process. The training consists of epochs. At the moment the most accurate network is Inception_v3, showing a result of about 65% after training for 40 epochs. The number of epochs is set manually based on the results. However, too many epochs can lead to re-training (overfitting), when the network learns local features rather than general ones, for example latching onto a colour or onto the presence or absence of a particular detail.</p>
      <p>During the training process the network computes the error at each step and then changes the weights to reduce it. For this purpose the stochastic gradient descent (SGD) optimizer is used, since at the moment it has proved to be the most effective when working with the network for career guidance.</p>
      <p>
        Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum,
Q(w) = (1/n) Σᵢ₌₁ⁿ Qᵢ(w),
where the parameter w that minimizes Q(w) is to be estimated [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
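      <p>A toy scalar sketch of SGD on such a sum-form objective (illustrative only; the paper uses the Keras SGD optimizer). Here each term is Qᵢ(w) = (w·xᵢ − yᵢ)², whose gradient is 2·xᵢ·(w·xᵢ − yᵢ); the data and learning rate are assumptions:</p>
      <preformat><![CDATA[
```python
import random

def sgd(grad_i, data, w0=0.0, lr=0.01, epochs=100, seed=0):
    """Minimise Q(w) = (1/n) * sum_i Q_i(w) by stochastic gradient descent.

    At each step the gradient of a single randomly chosen term Q_i
    is used to update w (a scalar toy, not the Keras implementation).
    """
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            w -= lr * grad_i(w, x, y)   # step on one term Q_i only
    return w

# Toy data generated by y = 3x, so the minimiser of Q(w) is w = 3.
data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]
w_hat = sgd(lambda w, x, y: 2 * x * (w * x - y), data)
```
]]></preformat>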
      <p>The neural network diagram for classifying images (completed test forms) is shown in Fig. 10. Consider the graphs of accuracy and loss on the training and validation data (Fig. 11). As can be seen from the graph in Fig. 11a, a discrepancy between validation and training accuracy appeared around the 12th epoch.</p>
      <p>The network began to re-train, and Fig. 11b shows that this overfitting is under way.</p>
      <p>To improve the network performance for further experiments with an expanded sample, it will be necessary to change the optimizer, create new classification layers, and possibly reduce the step of the weight updates, since this form of the validation curve may indicate that the values change too much at each epoch.</p>
      <p>Validation process. The training sample is divided into iterations. The network looks at a certain number of images per iteration and adjusts the weights, then evaluates the error. With the same weights the network then passes to the validation sample (15% of the data set aside from the training sample) and applies them to the images in it.</p>
      <p>A decrease of the error both in training and in validation indicates that the network is training successfully. If the error falls in training but grows in validation, re-training (overfitting) is taking place.</p>
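      <p>This re-training signal (training error falling while validation error rises) can be sketched as a simple check over per-epoch error curves; the helper below is illustrative and not part of the described pipeline:</p>
      <preformat><![CDATA[
```python
def overfitting_epoch(train_err, val_err, patience=3):
    """Return the first epoch at which validation error has risen for
    `patience` consecutive epochs while training error kept falling,
    or None if no such point exists."""
    rising = 0
    for e in range(1, len(val_err)):
        if val_err[e] > val_err[e - 1] and train_err[e] < train_err[e - 1]:
            rising += 1
            if rising == patience:
                return e - patience + 1   # first epoch of the rise
        else:
            rising = 0
    return None
```
]]></preformat>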
      <p>Testing process. The networks are tested on data they have not seen before (15% of the training sample). In other words, the network does not know which class a particular image belongs to and tries to determine it independently. The received responses are collected and the percentage of correct determinations is output.</p>
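      <p>The percentage of correct determinations could be computed as in this minimal sketch (a hypothetical helper, not the paper's code):</p>
      <preformat><![CDATA[
```python
def accuracy_percent(predictions, labels):
    """Percentage of images whose predicted class matches the true one."""
    correct = sum(1 for p, t in zip(predictions, labels) if p == t)
    return 100.0 * correct / len(labels)
```
]]></preformat>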
      <p>
        Library. The Keras library [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] was used to build
neural networks themselves, and the OpenCV computer
vision library was used to search for duplicates [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>Augmentation process. For the first test of the neural network, images divided by type (class) were used: A-1, A-2, …, A-70; B-1, B-2, …, B-70, and so on up to E-70, 350 images in total. Since there were not enough images for training, it was decided to apply augmentation. Due to the specifics of the task, some augmentation methods, such as stretching, rotations by angles other than 180°, warping and so on, were not used.</p>
      <p>The augmentation methods used in this problem are presented in Fig. 12:
1. 180° rotation (180 is added to the file name of such images);
2. Vertical reflection (top_bottom is added to the name of such images);
3. Horizontal reflection (left_right is added to the name of such images).</p>
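      <p>Assuming a scan is represented as a 2-D grid of cells, the three augmentations and their name suffixes could be sketched as follows (illustrative; the actual module operates on image files):</p>
      <preformat><![CDATA[
```python
def rotate_180(img):
    # img is a 2-D grid (list of rows); real code would act on pixels.
    return [row[::-1] for row in img[::-1]]

def flip_top_bottom(img):
    return img[::-1]

def flip_left_right(img):
    return [row[::-1] for row in img]

def augment(name, img):
    """Produce the three variants listed above, tagging the file name
    the same way ('180', 'top_bottom', 'left_right')."""
    return {
        name + "_180": rotate_180(img),
        name + "_top_bottom": flip_top_bottom(img),
        name + "_left_right": flip_left_right(img),
    }
```
]]></preformat>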
      <p>The results. Consider the performance of the neural network for each of the desired types (classes) of images at the end of the first experiment. On the experimental test sample the network showed the highest accuracy in classes B, C and E, and the lowest in class D (Fig. 13).</p>
      <p>In this sample the network was presented with both pronounced and mixed samples, the latter containing signs of several types. The sample was classified first by an expert and then by the neural network classifier. After repeated expert classification, the expert found that in some cases the correct answer had nevertheless been given by the neural network (Fig. 13).</p>
      <p>This fact is explained by individual features of human
visual perception.</p>
      <p>However, the network currently shows a low degree
of confidence in the result in class D (D type).</p>
    </sec>
    <sec id="sec-3">
      <title>4. The filtering algorithm</title>
      <p>The filtering algorithm is developed in parallel with the neural network algorithm. In the future both classifiers will undergo comparative tests and the main algorithm will be selected based on their results, while the second one will serve as a supporting or alternative method for the classes where it showed the best result.</p>
      <p>Consider a filter developed for the desired class D,
where at the moment the neural network does not classify
images reliably.</p>
      <p>The filtering process is based on the principle of comparing 2×2 regions of planigon cells with samples of «chess» and «chess-like» combinations consisting of cells in 4 states (Fig. 14).</p>
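      <p>A simplified sketch of this comparison, reducing the four cell states to a dark/light boolean (an assumption for illustration; the real filter works with 4-state cells):</p>
      <preformat><![CDATA[
```python
def chess_regions(grid):
    """Count 2x2 regions whose dark/light pattern alternates like a
    chessboard. `grid` holds booleans: True = dark cell, False = light."""
    count = 0
    for r in range(len(grid) - 1):
        for c in range(len(grid[0]) - 1):
            a, b = grid[r][c], grid[r][c + 1]
            d, e = grid[r + 1][c], grid[r + 1][c + 1]
            # Chess pattern: diagonals equal, neighbours different.
            if a == e and b == d and a != b:
                count += 1
    return count
```
]]></preformat>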
      <p>To count the number of combinations, we used permutation schemes with and without repetitions:
Pₙ = n!,
P(n₁, n₂, …, nₖ) = (n₁ + n₂ + … + nₖ)! / (n₁! ∙ n₂! ∙ … ∙ nₖ!),</p>
      <p>
        where n is the number of elements, Pₙ is the number of permutations, and P(n₁, …, nₖ) is the number of permutations with multiplicities n₁, …, nₖ [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>We obtained:
1. 24 combinations for the cases using cells of all 4 states in the 2×2 definition area, without repetitions;
2. 144 combinations for the cases using cells of 3 states with a single repeat of one state (0, 1, 2 or 3);
3. 24 combinations for the cases using cells of 2 states in the 2×2 definition area with a single repeat of two states (0, 1, 2 or 3).
In total there are 192 combinations.</p>
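      <p>The first two counts can be reproduced directly from the permutation formulas above (a small sketch; the decomposition of case 2 into state choices is our reading, and the third case's 24 is taken from the text as given):</p>
      <preformat><![CDATA[
```python
from math import comb, factorial

def perm_with_repeats(*counts):
    """P(n1, ..., nk) = (n1 + ... + nk)! / (n1! * ... * nk!)"""
    total = factorial(sum(counts))
    for n in counts:
        total //= factorial(n)
    return total

# Case 1: all four states, no repetition: P4 = 4! = 24.
case1 = factorial(4)
# Case 2: three of the four states, one of them doubled:
# 4 choices of the doubled state, C(3,2) choices of the other two,
# P(2,1,1) arrangements within the 2x2 area.
case2 = 4 * comb(3, 2) * perm_with_repeats(2, 1, 1)
# Case 3: two states, each doubled; the text counts 24 such cases.
case3 = 24
total = case1 + case2 + case3
```
]]></preformat>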
      <p>However, not all of the resulting combinations are «chess» or «chess-like». For example, Fig. 13 shows that most combinations form «bands» in 1×2 areas of cells, because cells of the same states fall into such an area.</p>
      <p>Now, out of 12 combinations only 4 remain, which leaves only 48 for the cases using cells of 3 states with a single repeat of one state (0, 1, 2 or 3).</p>
      <p>The image is translated into grayscale because, unlike the desired elements of the other types (classes), the elements of the desired D type (class) of images can contain cells of all possible colours of the palette provided by the ColourUnique M program. Since «chess» and «chess-like» structures are formed by alternating «dark» and «light» cells, converting the image to grayscale simplifies classification.</p>
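      <p>A sketch of one possible grayscale mode followed by dark/light thresholding. The standard luminosity weights and the threshold value are assumptions for illustration; as noted below, a per-channel mode may be needed instead:</p>
      <preformat><![CDATA[
```python
def to_gray(rgb):
    """Luminosity-style grayscale value for one (r, g, b) pixel.

    One possible mode; equally saturated colours of different tones
    can merge under a single fixed formula.
    """
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def dark_light(grid, threshold=128):
    """Map an RGB cell grid to True (dark) / False (light) cells."""
    return [[to_gray(cell) < threshold for cell in row] for row in grid]
```
]]></preformat>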
      <p>
        At the moment the main problem in implementing this filter is selecting the correct mode for translating images into grayscale, since colours that have similar saturation levels but different tones merge. A way out may be a per-channel translation, depending on which colours are used in each case [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>The paper explores the possibilities of different
algorithms for creating a final classifier for career
guidance tasks using the «Associative color space» ©
testing method. The efficiency of the neural network
classifier in each of the classes is analyzed. The filtering
algorithm is considered as an alternative or second
evaluation method for creating a combined classifier,
where either the most effective classifier or both will be
used for each class of images.</p>
      <p>The purpose of the work was to evaluate the first test
results and analyze the experimental output data. This
goal was achieved.</p>
      <p>When performing the work, the following was done:</p>
      <p>1. The most efficient neural network algorithm for developing the classifier was identified;</p>
      <p>2. A filtering algorithm is proposed as a second
component for creating a final classifier, in particular for
class D images that are currently classified by the neural
network with unsatisfactory accuracy.</p>
      <p>3. The problems and errors in the implementation of
the proposed algorithms were found, and ways to
eliminate them were developed.</p>
      <p>4. The results obtained can be applied to improve the
accuracy of image classification and adjust the work of
neural network algorithms, which will increase the
accuracy of evaluating and predicting the processes of
professional guidance of an individual.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was performed with the support of RFBR, grant № 19-07-00455.</p>
      <p>Tarasova Iuliia S., PhD of Sciences degree seeking
applicant, senior lecturer of Department of Industrial design,
Nizhny Novgorod State University of Architecture and Civil
Engineering. E-mail: tar06@list.ru</p>
      <p>Andreev Vyacheslav V., Head of the Department «Nuclear
reactors and power plants», Grand PhD of Sciences in
technology, associate professor, Nizhny Novgorod state
technical university n.a. R.E.Alekseev. E-mail:
vyach.andreev@mail.ru</p>
      <p>Ainbinder Roman M., PhD in Physico-mathematical
sciences, associate professor of the Department «Mathematics»,
Nizhny Novgorod State University of Architecture and Civil
Engineering; senior lecturer of Department of Information
technologies in Humanities research, Lobachevsky State
University of Nizhny Novgorod – National Research
University. Е-mail: romain@inbox.ru</p>
      <p>Toskin Denis V., master’s Degree student of IRIT, Nizhny Novgorod state technical university n. a. R. E. Alekseev. E-mail: toskin.dv@gmail.com</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Tsvetkov A. A., Shorokh D. K., Zubareva M. G., et al. Algorithms for object recognition // Technical Sciences: problems and prospects: materials of the IV International scientific conference. Saint Petersburg, 2016. P. 20-28. URL: https://moluch.ru/conf/tech/archive/166/10825/ (accessed: 08.05.2020).</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Proforientator.ru. Career guidance tests [Electronic resource]: website. URL: https://proforientator.ru/tests/ (accessed: 26.05.2020).</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Tarasova I.S., Chechin A.V., Andreev V.V. Implementation of algorithms of image analysis in the software package ColourUniquePRO with the aim of increasing the accuracy of classification of types of individuals // Computer Graphics and Vision: Proceedings of the 29th International Conference on Computer Graphics and Vision. Bryansk, Russia, September 23-26, 2019. P. 189-193.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Utrobin V. A. Computer image processing. Information models of the understanding stage: study manual. N. Novgorod: NSTU, 2006. 247 p.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861v1, 2017. URL: https://arxiv.org/pdf/1704.04861.pdf</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842v1, 2014. URL: https://arxiv.org/pdf/1409.4842.pdf</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Olaf Ronneberger, Philipp Fischer, Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597v1, 2015. URL: https://arxiv.org/pdf/1505.04597.pdf</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Stochastic gradient descent [Electronic resource] // Wikipedia: website. URL: wikipedia.org</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Keras: open neural network library. URL: https://keras.io/ (accessed: 08.05.2020).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] OpenCV: computer vision library. URL: https://opencv.org/ (accessed: 08.05.2020).</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Topunov V.L. Combinatorics. Workshop on solving problems: textbook / edited by V.I. Nechaev, V.G. Chirsky. 2nd ed. Moscow: MPSU, 2016. 88 p.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Andreev V.V., Tarasova I.S., Chechin A.V. Problems and prospects of implementation of the algorithm of classification of test forms of the ColourUnique Pro software complex // Information systems and technologies-2020: collection of materials of the XXVI International scientific and technical conference [Electronic resource]. N. Novgorod: NSTU, 2020. P. 913-918. URL: https://www.nntu.ru/frontend/web/ngtu/files/news/2020/05/12/ist2020/sbornik_ist2020.pdf</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>