<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Deep Visual Features Matching Method for Vehicle Model Re-Identification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nikolay Nemcev</string-name>
          <email>nicknemcev@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Vasilenko</string-name>
          <email>apfelrobbe@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITMO University</institution>
          ,
          <addr-line>Saint Petersburg, 197101, Russian Federation</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we study existing methods for extracting and comparing object features used in the task of vehicle model re-identification from an image. This task is one of the most important ones facing automated traffic control systems; it is solved by comparing the features of the vehicle being verified against a set of features obtained by the monitoring system earlier, and deciding whether the compared samples belong to the same vehicle model or to different ones. The article describes an approach to vehicle model re-identification from an image, based on feature vector extraction with a classification convolutional neural network and on a feature-matching criterion that counts corresponding features. The proposed method has lower computational complexity than modern analogous approaches, uses a smaller feature vector, demonstrates comparable re-identification accuracy in scenarios where the testing data share the characteristics of the training data (similar camera model, similar level of lighting and noise, re-identifiable vehicle models contained in the training dataset), and achieves significantly higher relative accuracy when the testing data differ greatly from the training dataset. The proposed approach is practically applicable to the vehicle re-identification task in highly loaded traffic control systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Visual data processing</kwd>
        <kwd>machine learning</kwd>
        <kwd>convolutional neural networks</kwd>
        <kwd>feature extraction</kwd>
        <kwd>feature comparison</kwd>
        <kwd>Alexnet</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Computer vision algorithms are used to solve various tasks in the automotive
industry: from detecting vehicles, measuring their speed, and counting their number,
to providing environmental analysis functions for autonomous moving devices.
The task of re-identifying vehicles is one of the most important ones solved by
traffic control systems. It may be expressed as the selection of a vehicle's features
(both of each individual vehicle and of a specific group of vehicles of
the same model) for further comparison with a set of previously extracted
features in order to determine the conformity of the samples.
Copyright (c) 2019 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        The task of re-identifying a car is related to the task of facial re-identification;
however, it also has its own specifics. In the car identification task, it is
necessary to ensure a reliable comparison of the car's attributes regardless of the
angle of view (front, side, back, at different angles), while face identification is
usually done from one angle (generally full face). Also, different car models may
visually differ only from a certain angle (especially when car models from the same
manufacturer are compared), and images of the same car taken from different
angles may contain only a few common details, which greatly complicates the task
of re-identification [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Conditionally, all approaches to re-identifying objects can be divided into
those using classic feature extraction methods [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], and those based on
feature extraction using convolutional neural networks [
        <xref ref-type="bibr" rid="ref1 ref4">1, 4</xref>
        ],
while neural network approaches (based on the use of artificial neural networks)
can be divided into those working according to the classical scheme, in which
features are determined for each compared image separately and their matching is
placed in a distinct module [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and approaches based on the use of Siamese neural
networks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] in which two input images are processed in parallel within the same
network, and the similarity metric is calculated directly on the last layer [
        <xref ref-type="bibr" rid="ref1 ref7">1, 7</xref>
        ].
      </p>
      <p>
        Classical approaches to extracting and comparing features are hardly
applicable in practice to the task of verifying a vehicle model, due to the fact that in
real use cases comparisons are often made for vehicles captured at a considerable
distance, whose images do not have a resolution high enough to extract a
significant number of features [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]; the procedure of comparing images of
vehicles is reduced to counting the number of matches between the received
descriptors [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which does not allow aligning compared objects captured
from different angles (the sets of features obtained from different angles for the same
vehicle model will differ).
      </p>
      <p>
        Neural network approaches to comparing objects in an image use a
classification network architecture for extracting object feature vectors (e.g. [
        <xref ref-type="bibr" rid="ref10 ref11">10,
11</xref>
        ]) and perform their further comparison in a separate module [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or on the last
network layer in the case of Siamese networks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The main advantages of this approach
include the possibility of multi-angle comparison and the absence of strict resolution
requirements for the compared images. The main disadvantage is that the feature
extraction module must be trained on a corresponding data set: for
example, for correct identification of the characteristics of a particular car model, the classifier
should be trained to classify the most complete set of car models, ideally
containing the model to be verified; meanwhile, the visual characteristics of the
dataset used should correspond to the real ones as much as possible (for reliable
identification of a car model at night, the classifier must be trained on
a data set containing night photos) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Besides that, the effect of "overfitting" is observed in all neural network
approaches: a model of the classifier, or of the verifier in this case,
demonstrates good results on the training data set but significantly loses
accuracy on data far different from the training set [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Moreover, both feature-set
extraction and feature-set comparison modules based on machine learning
methods are prone to this effect [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which considerably reduces the universality
of such approaches.
      </p>
      <p>
        In addition to the "overfitting" problems, approaches based on Siamese networks
have problems with network convergence at the training stage,
caused by the heterogeneity of the input data [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], which complicates their usage
in some cases.
      </p>
      <p>
        The proposed approach combines an approach for extracting a short
feature vector, based on the use of a modified Alexnet classification network [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
        ],
trained on a specially prepared data set, and a simple similarity metric that
operates on small feature vectors and is based on estimating the
number of matching features. This metric is used both to
reduce the computational complexity of the task and to optimize the
computational process of vehicle model re-identification for use in highly loaded
traffic control systems, and to reduce the impact of the machine-learning
"overfitting" effect on the stability of a system that processes data
significantly different in characteristics from those used to train the
feature extraction module [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>The obtained results show that, despite its simplicity, the proposed approach
demonstrates accuracy in the car model verification task comparable with other
modern approaches, while offering higher universality and lower
computational complexity.</p>
    </sec>
    <sec id="sec-2">
      <title>Feature extraction model</title>
      <p>
        The modified network Alexnet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] was used as the feature set extraction module in this paper.
      </p>
      <p>
        Instead of ReLU (Rectified Linear Unit), which can be calculated as shown in Eq. (1),
where x is the activation function input value, RReLU (Randomized
Rectified Linear Unit) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] was used as the activation function:
      </p>
      <p>f(x) = max(0, x), (1)</p>
      <p>f(x) = { x, if x ≥ 0; αx, otherwise }, (2)</p>
      <p>where α ∼ U(l, u), l &lt; u, l, u ∈ [0, 1), and U(l, u) is a uniform distribution; in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] this is parametrized as α = 1/a with a ∼ U(3, 8), which was used here, and α = (l + u)/2 is used at the testing stage.</p>
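The activation in Eq. (2) can be sketched in NumPy; the divisor form a ∼ U(3, 8) (negative slope α = 1/a) follows [16], and the default bounds here are assumptions:

```python
import numpy as np

def rrelu(x, lower=3.0, upper=8.0, training=True, rng=None):
    """Sketch of RReLU: positive inputs pass through unchanged; negative
    inputs are divided by a random factor a drawn from U(lower, upper)
    during training, and by the fixed mean (lower + upper) / 2 at test
    time. The (3, 8) bounds follow the U(3, 8) distribution named in
    the text."""
    x = np.asarray(x, dtype=float)
    if training:
        rng = rng if rng is not None else np.random.default_rng()
        a = rng.uniform(lower, upper, size=x.shape)
    else:
        a = (lower + upper) / 2.0
    return np.where(x >= 0, x, x / a)
```

At test time the random slope collapses to its deterministic average, so repeated evaluations of the same input give the same output.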
      <p>
        Classical ReLU can break down during the training process [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]; for example,
a large gradient passing through ReLU may lead to such an update of the weights
that the neuron will never be activated again [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. If this happens, from that
moment the gradient passing through the neuron will always be evaluated as 0, which
negatively affects the effectiveness of classifier training. The research presented
in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] showed that the use of a small "leak" for x &lt; 0 both decreases the
probability of a neuron failing at the training stage and somewhat decreases the
"overfitting" effect of the network, due to the random nature of the parameter α.
      </p>
      <p>
        In order to accelerate the convergence of the network (reduce training time)
and increase the stability of its operation, a standard approach was used at the
training stage, based on the addition of special normalizing layers (BatchNorm,
batch normalization [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]).
      </p>
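The normalizing layer's forward pass can be illustrated with a minimal NumPy sketch (training-mode batch statistics only; gamma and beta stand for the learnable affine parameters):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Minimal batch-normalization forward pass: normalise each feature
    column over the batch to zero mean and unit variance, then apply
    the learnable affine transform gamma * x_hat + beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Keeping the per-feature statistics close to a fixed distribution is what stabilises training and speeds up convergence.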
      <p>
        Network architecture Alexnet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] was used because it is well studied and
rather shallow (a low count of hidden layers), which eases the
classifier training process and, compared to other network architectures, provides
low computational complexity of the feature extraction process.
      </p>
      <p>
        The training part of the StanfordCars [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] dataset, which contains 8,144 images
of static cars of 196 different models (in total, the dataset contains 16,185 images), was used for
classifier training. For the purposes of increasing classification accuracy, decreasing the
"overfitting" effect, and raising classifier universality, the source dataset was
augmented 10 times, and additional data balancing was done [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] (the number of images
of each car model was equalized up to 450 samples). The augmentation
of the original images was carried out using affine transformations,
perspective transformations, contrast changes, Gaussian noise, hue/saturation changes,
cropping/padding, and blurring. For each image, the set of applied transformations was
chosen randomly, and the parameters of each transformation were drawn from a
uniform distribution with a transformation-specific predefined range.
      </p>
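The augmentation sampling described above can be sketched as follows; the transformation names and parameter ranges are illustrative assumptions, since the exact transformation-specific ranges are not listed:

```python
import random

# Hypothetical per-transformation parameter ranges (illustrative only;
# the exact, transformation-specific ranges are not given in the text).
TRANSFORM_RANGES = {
    "rotate_deg": (-15.0, 15.0),     # affine rotation
    "perspective": (0.0, 0.1),       # perspective distortion scale
    "contrast_gain": (0.8, 1.2),     # contrast change
    "noise_std": (0.0, 0.05),        # Gaussian noise level
    "hue_shift": (-0.1, 0.1),        # hue/saturation change
    "crop_fraction": (0.0, 0.1),     # cropping/padding
    "blur_sigma": (0.0, 1.5),        # blurring
}

def sample_augmentation(rng=random):
    """Choose a random subset of transformations and draw each parameter
    uniformly from its predefined range, as described in the text."""
    names = sorted(TRANSFORM_RANGES)
    chosen = rng.sample(names, k=rng.randint(1, len(names)))
    return {name: rng.uniform(*TRANSFORM_RANGES[name]) for name in chosen}
```

Each call yields a fresh, random recipe, so every augmented copy of an image differs from the others.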
      <p>The classification network receives a relevant image on its input to get the object's
feature vector, and extracts a set of coefficients after the last MaxPool layer,
represented as a value matrix sized 256 × 6 × 6:</p>
      <p>F_i = Σ_{k=1..6} Σ_{m=1..6} (C_{i,k,m} - min(C)) / (max(C) - min(C)), (3)</p>
      <p>where F_i ∈ F is an element of the resulting feature vector containing 256 elements, and
C_{i,k,m} ∈ C is an element of the coefficient matrix extracted from the network.</p>
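A NumPy sketch of Eq. (3), assuming the 256 × 6 × 6 coefficient tensor from the last MaxPool layer is available as an array:

```python
import numpy as np

def extract_feature_vector(coeffs):
    """Eq. (3): min-max normalise the whole 256x6x6 coefficient tensor C,
    then sum each channel over its 6x6 spatial grid, yielding a
    256-element feature vector F."""
    c_min, c_max = coeffs.min(), coeffs.max()
    normalised = (coeffs - c_min) / (c_max - c_min)
    return normalised.sum(axis=(1, 2))

C = np.random.default_rng(0).random((256, 6, 6))  # stand-in for real coefficients
F = extract_feature_vector(C)                      # shape (256,)
```

Because the normalisation is global over the tensor, every element of F lies between 0 and 36 (the number of cells in a 6 × 6 grid).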
    </sec>
    <sec id="sec-3">
      <title>Feature vector similarity criterion</title>
      <p>The feature vector similarity criterion receives two objects' feature
vectors F and F' on its input and computes a similarity criterion S ∈ [0, 1]:</p>
      <p>S = Σ_{i=1..256} J(F_i, F'_i) / Σ_{i=1..256} I(F_i, F'_i), (4)</p>
      <p>I(F_i, F'_i) = { 1, if F_i > thr or F'_i > thr; 0, otherwise }, (5)</p>
      <p>J(F_i, F'_i) = { 0, if |F_i - F'_i| > (1/2) max(F_i, F'_i); I(F_i, F'_i), otherwise }, (6)</p>
      <p>where thr is a feature significance threshold: I marks the features that are significant in at least one of the compared vectors, and J marks the significant features whose values also agree.</p>
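A sketch of the matching criterion under these definitions; the significance threshold thr below is a tunable assumption:

```python
import numpy as np

def similarity(f1, f2, thr=0.5):
    """Count the significant features (above thr in either vector) whose
    values also agree (absolute difference at most half of the larger
    value), and return their share among all significant features."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    significant = np.logical_or(f1 > thr, f2 > thr)       # I(F_i, F'_i)
    agree = 0.5 * np.maximum(f1, f2) >= np.abs(f1 - f2)
    matching = np.logical_and(significant, agree)         # J(F_i, F'_i)
    if not significant.any():
        return 0.0
    return float(matching.sum() / significant.sum())      # S in [0, 1]
```

The criterion needs only element-wise comparisons over a 256-element vector, which is what keeps its computational cost low.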
    </sec>
    <sec id="sec-4">
      <title>Assessment of the effectiveness of the proposed approach</title>
      <p>
        A comparative assessment of the effectiveness of the proposed feature-similarity
criterion was carried out against an approach based on evaluating the Euclidean distance
between the compared vectors, an approach using the support vector machine (SVM
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]), as well as an approach based on the use of Siamese neural networks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and
other modern analogues, both on a test subset of the StanfordCars data set used
for training [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and on examples from the CompCars data set, which contains
images of moving vehicles captured by surveillance cameras, with large
appearance variations due to varying conditions of light, weather, traffic, etc.
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        As the test data of the set StanfordCars [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], we used 5,000 "positive"
pairs consisting of images of cars of one model and 5,000 "negative" pairs
consisting of images of cars of different models, randomly selected from
a subset of images that were not used in the training process. As test data of
the CompCars set we used the original test-data structure with three difficulty
levels, each containing 20,000 compared pairs of images
(10,000 "positive" and 10,000 "negative" examples) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Each image pair in the "easy
set" is selected from the same viewpoint, while each pair in the "medium set" is
selected from a pair of random viewpoints. Each negative pair in the "hard set"
is chosen from the same car make.
      </p>
      <p>As a quality criterion, a re-identification accuracy metric was used, calculated
as:</p>
      <p>Accuracy = 100 (TP + TN) / N, %, (7)</p>
      <p>where TP is the number of correctly recognized "positive" image pairs (the pair
contains images of vehicles of the same model, and the verifier evaluates them
as elements of one subset), TN is the number of correctly recognized "negative"
pairs, and N is the number of compared pairs.</p>
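The metric of Eq. (7) is straightforward to compute; the pair counts in the example are hypothetical:

```python
def accuracy(tp, tn, n):
    """Re-identification accuracy (Eq. 7): the share of correctly
    recognised "positive" and "negative" pairs among all compared
    pairs, in per cent."""
    return 100.0 * (tp + tn) / n

# Hypothetical example: 4300 correct positives and 4100 correct
# negatives out of 10000 compared pairs.
print(accuracy(4300, 4100, 10000))  # -> 84.0
```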
      <p>Evaluation of the effectiveness of the proposed approach and the compared approaches on
the test subset of the training data set is given in Table 1. For the evaluation
of the proposed criterion and of the metric based on the Euclidean
distance, a threshold was used that was selected to ensure maximum
verification efficiency (accuracy) on the training data.</p>
      <p>
        The comparison presented in Table 1 allows us to conclude that, when
processing data as close as possible to the training data, the proposed criterion for comparing
feature vectors is somewhat inferior to the comparison method based on
machine learning techniques (in this case, the support vector method was used)
and surpasses the approach based on estimating the Euclidean distance between
vectors. It should also be noted that the lack of results for the Siamese network
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is due to the fact that, despite a search over training hyperparameters,
it was not possible to ensure the convergence of the network on the
StanfordCars data set, which is a known problem when training networks with
a Siamese architecture on heterogeneous data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        A comparative analysis of the effectiveness of the proposed approach on the
CompCars [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] data set is given in Table 2.
      </p>
      <p>
        Based on the analysis of Table 2, we can draw the following conclusions:
- the proposed feature-vector similarity metric maintains
the relative verification accuracy when switching to a data set significantly
different from the training one, while the feature-comparison approach using the
support vector method significantly loses accuracy due to the "overfitting" effect;
- on a data set significantly different from the training one, the proposed vehicle
model verification system demonstrates accuracy comparable to that of
similar basic algorithms (GoogleNet + SVM [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]), trained on a subset of the data
close to the test data; however, it loses considerably to modern verification approaches
(Mixed Diff + CCL [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]), whose training was also carried out on a subset
of data similar in characteristics to the test data.
      </p>
      <p>
        In addition, it should be noted that the proposed approach to verifying a
car model operates with shorter feature vectors compared to GoogleNet + SVM
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] (256 for the proposed approach, 4096 for GoogleNet + SVM [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]) and does
not require a long training process for the similarity estimation module. Also,
unlike Mixed Diff + CCL [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which belongs to the class of Siamese networks,
it allows saving the feature vector separately for further comparison, without
computationally complex feature extraction operations.
      </p>
      <p>For the class of cyber-physical systems under consideration, it is important
to ensure the probability of completing tasks within a given time if the proposed
approach is used in real time, which, as shown in [<xref ref-type="bibr" rid="ref19 ref20 ref21">19-21</xref>], is achievable with
redundant calculations.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>The proposed approach for selecting and comparing object features from
their images is used in the task of verifying a vehicle model. Feature selection is
performed by a modified, well-known artificial neural network trained on a
specially prepared (augmented) data set. The proposed criterion for feature-vector
similarity is based on simple comparison techniques and has extremely low
computational complexity. The proposed vehicle verification method demonstrates
accuracy comparable to similar modern methods in those use cases where the
processed data have the same characteristics as the training data (a similar camera
model, similar level of lighting and noise, etc.), and demonstrates higher relative
accuracy when processing data that differ significantly from the training data.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Liu H</surname>
          </string-name>
          . et al.
          <article-title>Deep relative distance learning: Tell the difference between similar vehicles //</article-title>
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          .
          <year>2016</year>
          . P.
          <volume>2167</volume>
          {
          <fpage>2175</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Rublee</surname>
            <given-names>E.</given-names>
          </string-name>
          et al.
          <article-title>ORB: An efficient alternative to SIFT or SURF /</article-title>
          / ICCV.
          <year>2011</year>
          . V. 11. N 1. P.
          <volume>2</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Pan</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lyu</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Region duplication detection using image feature matching //</article-title>
          <source>IEEE Transactions on Information Forensics and Security</source>
          .
          <year>2010</year>
          . V.
          <article-title>5</article-title>
          . N 4. P.
          <volume>857</volume>
          {
          <fpage>867</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Zapletal</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herout</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Vehicle re-identification for automatic video traffic surveillance //</article-title>
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</source>
          .
          <year>2016</year>
          . P.
          <volume>25</volume>
          {
          <fpage>31</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Yang</surname>
            <given-names>L.</given-names>
          </string-name>
          et al.
          <article-title>A large-scale car dataset for fine-grained categorization and verification</article-title>
          <source>// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          .
          <year>2015</year>
          . P.
          <volume>3973</volume>
          {
          <fpage>3981</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Koch</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zemel</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salakhutdinov</surname>
            <given-names>R</given-names>
          </string-name>
          .
          <article-title>Siamese neural networks for one-shot image recognition // ICML deep learning workshop</article-title>
          .
          <year>2015</year>
          . V.
          <volume>2</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Cheng D. et al.
          <article-title>Person re-identification by multi-channel parts-based cnn with improved triplet loss function //</article-title>
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          .
          <year>2016</year>
          . P.
          <volume>1335</volume>
          {
          <fpage>1344</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Ke</surname>
            <given-names>Y.</given-names>
          </string-name>
          et al.
          <article-title>PCA-SIFT: A more distinctive representation for local image descriptors // CVPR (2</article-title>
          ).
          <year>2004</year>
          . V. 4. P.
          <volume>506</volume>
          {
          <fpage>513</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Ng</surname>
            <given-names>P.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Henikoff</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>SIFT: Predicting amino acid changes that affect protein function //</article-title>
          <source>Nucleic acids research</source>
          .
          <year>2003</year>
          . V. 31. N 13. P.
          <volume>3812</volume>
          {
          <fpage>3814</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Krizhevsky</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sutskever</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            <given-names>G.E.</given-names>
          </string-name>
          <article-title>Imagenet classification with deep convolutional neural networks //</article-title>
          <source>Advances in neural information processing systems</source>
          .
          <year>2012</year>
          . P.
          <volume>1097</volume>
          {
          <fpage>1105</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Szegedy</surname>
            <given-names>C.</given-names>
          </string-name>
          et al.
          <source>Going deeper with convolutions // Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          .
          <year>2015</year>
          . P.
          <volume>1</volume>
          {
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Tanner</surname>
            <given-names>M.A.</given-names>
          </string-name>
          <article-title>Tools for statistical inference: observed data and data augmentation methods</article-title>
          .
          <source>Springer Science and Business Media</source>
          ,
          <year>2012</year>
          . V.
          <volume>67</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>John</surname>
            <given-names>G.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kohavi</surname>
            <given-names>R.</given-names>
          </string-name>
          , Pfleger K.
          <article-title>Irrelevant features and the subset selection problem // Machine Learning Proceedings 1994</article-title>
          . Morgan Kaufmann,
          <year>1994</year>
          . P.
          <volume>121</volume>
          {
          <fpage>129</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Joachims</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Making large-scale SVM learning practical</article-title>
          .
          <source>Technical report, SFB</source>
          <volume>475</volume>
          :
          <article-title>Komplexitatsreduktion in Multivariaten Datenstrukturen</article-title>
          , Universitat Dortmund,
          <year>1998</year>
          .
          <source>N</source>
          <year>1998</year>
          ,
          <volume>28</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Hoffer</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ailon</surname>
            <given-names>N.</given-names>
          </string-name>
          <article-title>Deep metric learning using triplet network //</article-title>
          <source>International Workshop on Similarity-Based Pattern Recognition</source>
          . Springer, Cham,
          <year>2015</year>
          . P.
          <volume>84</volume>
          {
          <fpage>92</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Xu</surname>
            <given-names>B.</given-names>
          </string-name>
          et al.
          <article-title>Empirical evaluation of rectified activations in convolutional network</article-title>
          // arXiv preprint arXiv:
          <volume>1505</volume>
          .
          <fpage>00853</fpage>
          .
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Howard</surname>
            <given-names>A.G.</given-names>
          </string-name>
          et al.
          <article-title>Mobilenets: Efficient convolutional neural networks for mobile vision</article-title>
          applications // arXiv preprint arXiv:
          <volume>1704</volume>
          .
          <fpage>04861</fpage>
          .
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Krause</surname>
            <given-names>J</given-names>
          </string-name>
          . et al.
          <article-title>3d object representations for fine-grained categorization //</article-title>
          <source>Proceedings of the IEEE International Conference on Computer Vision Workshops</source>
          .
          <year>2013</year>
          . P.
          <volume>554</volume>
          {
          <fpage>561</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          ,
          <article-title>"Multipath Redundant Transmission with Packet Segmentation," 2019 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF), Saint-Petersburg</article-title>
          , Russia,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          . doi:
          <volume>10</volume>
          .1109/WECONF.
          <year>2019</year>
          .8840643
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Bogatyrev</surname>
          </string-name>
          ,
          <article-title>"Model and Interaction Efficiency of Computer Nodes Based on Transfer Reservation at Multipath Routing," 2019 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF), Saint-Petersburg</article-title>
          , Russia,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          . doi:
          <volume>10</volume>
          .1109/WECONF.
          <year>2019</year>
          .8840647
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Bogatyrev</surname>
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogatyrev</surname>
            <given-names>S.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogatyrev</surname>
            <given-names>V.A.</given-names>
          </string-name>
          <article-title>Analysis of the Timeliness of Redundant Service in the System of the Parallel-Series Connection of Nodes with Unlimited Queues//2018 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF</article-title>
          ),
          <year>2018</year>
          , pp.
          <fpage>8604379</fpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>