<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CLEF2024 Working Notes, CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Automatic Medical Concept Detection on Images: Dividing the Task into Smaller Ones</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Axel Moncloa-Muro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Graciela Ramirez-Alonso</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fernando Martinez-Reyes</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Facultad de Ingeniería, Universidad Autónoma de Chihuahua, Circuito Universitario Campus II</institution>
          ,
          <addr-line>31125 Chihuahua</addr-line>
          ,
          <country country="MX">Mexico</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <fpage>3</fpage>
      <lpage>7</lpage>
      <abstract>
        <p>This paper describes the approach proposed by the UACH-VisionLab team for the ImageCLEFmedical Concept Detection subtask 2024. The objective of this subtask is to automatically assign medical concepts to images. In particular, 1,945 distinct Concept Unique Identifiers (CUIs) must be associated with medical images, which represents a multi-label classification (MLC) problem. In this context, the ImageCLEFmedical Concept Detection subtask provides a multi-label dataset in which a medical image may contain multiple descriptive labels. The class imbalance problem in MLC poses a challenge: the samples and their corresponding labels are not uniformly distributed over the dataset. To address this challenge, our approach employs an ensemble of five EfficientNet B0 (ENB0) neural architectures. An initial ENB0 network classifies each image over all possible labels. Based on its classification results, we create subgroups of multi-label datasets around specific CUIs: ultrasonography, bone structure of cranium, angiogram, and lower extremity. A separate ENB0 architecture is trained for each of these subgroups. Finally, the outputs of the five neural architectures are combined to generate the final prediction. Our proposal ranks 5th in the ImageCLEFmedical Concept Detection subtask, achieving an F1-score of 0.59. The code implementing our proposal can be found at https://github.com/axelm11/CLEF-ImageCLEF-2024.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-label classification</kwd>
        <kwd>imbalanced data</kwd>
        <kwd>EfficientNet</kwd>
        <kwd>ImageCLEFmedical</kwd>
        <kwd>ensemble</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        This working note paper describes the participation of the UACH-VisionLab team in the
ImageCLEFmedical 2024 Concept Detection subtask [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ,
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Our approach is based on an EfficientNet B0 (ENB0) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] neural model trained to classify each
image with 1,945 possible medical concepts. Given the high imbalance of the dataset, an additional four
ENB0 models were trained to identify specific concepts and improve the performance of our proposal.
      </p>
      <p>The rest of this paper is organized as follows: Section 2 presents a general description of the
ImageCLEFmedical dataset, Section 3 introduces our approach, and Sections 4 and 5 provide results and
conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Dataset</title>
      <p>
        The multimodal data utilized in the ImageCLEFmedical Lab is derived from the Radiology Object in
Context version 2 (ROCOv2) dataset [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This dataset consists of radiological images accompanied by
their respective medical concepts and captions. It comprises three distinct subsets: the training set,
the validation set, and the test set. The training and validation sets are accompanied by
comma-separated value (CSV) files, which contain the medical image identifiers and the corresponding Concept
Unique Identifiers (CUIs). The objective of the concept detection task is to automatically assign the
corresponding CUIs to the different images of the dataset. Figure 1 shows a visual representation of the
medical concepts associated with the different CUIs, where the size of each word is related to
its frequency. Among the most frequently occurring concepts are X-Ray Computed Tomography, Plain
x-ray, Ultrasonography, Magnetic Resonance Imaging, and Chest.
      </p>
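      <p>As an illustration, the following minimal Python sketch converts such a CSV file into multi-hot label vectors. The file name and the column layout (an image identifier followed by semicolon-separated CUIs) are our assumptions for illustration, not the official specification.</p>
      <preformat>
# A minimal sketch of building multi-hot label vectors from the
# annotation CSV. The file name and the "ID,CUIs" column layout with
# semicolon-separated CUIs are assumptions for illustration.
import csv
import numpy as np

def load_labels(csv_path):
    """Return image ids, a multi-hot label matrix, and the CUI vocabulary."""
    ids, cui_lists = [], []
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for image_id, cuis in reader:
            ids.append(image_id)
            cui_lists.append(cuis.split(";"))
    # Build a fixed vocabulary over all distinct CUIs in the file.
    vocab = sorted({c for row in cui_lists for c in row})
    index = {c: i for i, c in enumerate(vocab)}
    labels = np.zeros((len(ids), len(vocab)), dtype=np.float32)
    for r, row in enumerate(cui_lists):
        for c in row:
            labels[r, index[c]] = 1.0
    return ids, labels, vocab

ids, labels, vocab = load_labels("train_concepts.csv")  # hypothetical file name
      </preformat>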
      <p>Assigning the 1,945 possible medical concepts to each image in the ImageCLEFmedical
dataset is highly challenging. For instance, images obtained
from the same imaging modality may describe different conditions affecting different parts of the body.
This is exemplified in Figure 2, where images corresponding to the same modality, X-Ray Computed
Tomography, show different parts of the body emphasizing different medical concepts.</p>
      <p>Another case is presented in Figure 3, where different image modalities present the same medical
CUI. In this case, an angiogram, a plain x-ray, and a magnetic resonance image are all associated with the
CUI heart. Therefore, one CUI can be present in different image modalities.</p>
      <p>Figure 4 shows an additional challenging scenario, where images that appear to be highly similar
may, in fact, have different CUIs.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <p>
        Our proposal is based on the baseline model provided by the ImageCLEFmedical 2024 organizers, an
EfficientNet B0 (ENB0) neural architecture. Our team evaluated different neural architecture models, such
as ResNet [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], DenseNet [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], the Vision Transformer (ViT) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and Convolutional vision Transformer
(CvT) [8]. However, the one proposed by the organizers yielded the best F1-scores on the validation
set. The results of the ENB0 model indicate that certain CUIs exhibit highly accurate F1 performance
while others exhibit zero performance. This discrepancy is primarily attributed to the multi-label class
imbalance issue inherent in real-world application datasets [9, 10, 11, 12, 13]. Table 1 presents the
top eight F1-score performances. Based on these results, we select specific CUIs to create four
multi-label subgroups to train and validate separate ENB0 models. The number of support samples and
the visual similarities in the images were considered when selecting these CUIs. For example, the categories
bone structure of cranium, lower extremity, and angiogram exhibit a comparable number of samples. In
contrast, ultrasonography is a particularly interesting image modality, given the homogeneity of the
images within this subgroup.
      </p>
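      <p>For reference, a minimal sketch of how such an ENB0 multi-label classifier can be set up in PyTorch follows. Using torchvision's EfficientNet-B0 with ImageNet weights and a sigmoid/binary cross-entropy head is our assumption; the organizers' baseline code may differ in detail.</p>
      <preformat>
# A minimal sketch of the initial ENB0 multi-label classifier
# (assumed torchvision implementation, not the official baseline).
import torch
import torch.nn as nn
from torchvision import models

NUM_CUIS = 1945  # distinct concepts in the challenge

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
# Replace the 1000-way ImageNet head with a 1,945-way head.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, NUM_CUIS)

# Multi-label training uses one sigmoid/BCE term per CUI.
criterion = nn.BCEWithLogitsLoss()

def predict(images, threshold=0.5):
    """Return a binary multi-label prediction per image."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(images))
    return (probs > threshold).float()
      </preformat>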
      <p>Figure 5 shows a block diagram of the proposed approach. First, an initial ENB0 model is trained to
classify all the images of the training dataset over all the possible CUIs of the challenge. The output of this
model is a vector of dimensionality 1,945. Then, four subgroups are defined based on the classification
results for the ultrasonography, bone structure of cranium, lower extremity, and angiogram CUIs. If an
image is classified into any of these four concepts, it is considered part of the corresponding
subgroup. Once the subgroups have been defined, a separate ENB0 model is trained on each of them
to identify the possible medical concepts they contain. During training, we eliminate
those CUIs with a very high or very low frequency of appearance to avoid severe class imbalance
issues.</p>
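      <p>A sketch of this routing step follows; the trigger indices are hypothetical placeholders for the positions of the four CUIs in the 1,945-dimensional output vector.</p>
      <preformat>
# A sketch of assigning an image to subgroups from the initial ENB0
# prediction. The indices below are hypothetical placeholders.
TRIGGER_CUIS = {
    "ultrasonography": 0,    # placeholder index
    "cranium": 1,            # placeholder index
    "lower_extremity": 2,    # placeholder index
    "angiogram": 3,          # placeholder index
}

def route_to_subgroups(binary_prediction):
    """Return the names of the subgroups whose trigger CUI fired."""
    return [name for name, idx in TRIGGER_CUIS.items()
            if binary_prediction[idx] > 0]
      </preformat>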
      <p>For example, plain x-ray is a very common concept; therefore, it is eliminated from all
the subgroups. For low-frequency concepts, we keep only those CUIs with a support set of at least 50
samples, and limit each model to a maximum of 20 concepts to predict.</p>
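      <p>A sketch of this filtering rule follows; the helper name and signature are illustrative, with the banned high-frequency concept shown as an explicit list entry.</p>
      <preformat>
# A sketch of the CUI filtering rule used when building a subgroup:
# drop listed high-frequency concepts, require a support set of at
# least 50 samples, and keep at most 20 concepts per model.
from collections import Counter

def select_subgroup_cuis(cui_lists, banned=("plain x-ray",),
                         min_support=50, max_concepts=20):
    counts = Counter(c for row in cui_lists for c in row)
    kept = [c for c, n in counts.most_common()
            if c not in banned and n >= min_support]
    return kept[:max_concepts]
      </preformat>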
      <p>The proposed methodology is then as follows. If the initial ENB0 identifies that the input medical
image contains a CUI associated with the concepts ultrasonography, bone structure of cranium, lower
extremity, or angiogram, then the ENB0 model trained on the corresponding subgroup also analyzes this
input image and produces an output prediction. All predictions identified by the second
ENB0 are included in the initial prediction. In other words, four ENB0 neural architectures are
employed to enhance the outcome of the initial model. To ensure a precise final prediction, care
must be taken in determining the location of each CUI, as the output dimensionality of these models
differs. Figure 6 illustrates this procedure. In this example, the angiogram concept is identified, and the
prediction of the model trained on this specific subgroup is used to generate the final prediction
result. In this case, the second ENB0 model detects four new concepts, which are included in the final prediction.</p>
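      <p>A sketch of this merging step follows. The mapping from each subgroup model's local output positions to global CUI indices is shown as a simple list; the values are placeholders.</p>
      <preformat>
# A sketch of merging a subgroup model's prediction into the initial
# 1,945-dimensional prediction. local_to_global maps each position of
# the subgroup output to its index in the full CUI vocabulary; the
# placeholder mapping below is for illustration only.
import numpy as np

def merge_predictions(global_pred, subgroup_pred, local_to_global):
    """Add concepts detected by the subgroup model (logical OR)."""
    merged = global_pred.copy()
    for local_idx, global_idx in enumerate(local_to_global):
        merged[global_idx] = max(merged[global_idx],
                                 subgroup_pred[local_idx])
    return merged

# Example: a 20-concept subgroup model whose outputs map to arbitrary
# placeholder positions of the global vector.
local_to_global = list(range(100, 120))   # placeholder mapping
final_pred = merge_predictions(np.zeros(1945), np.ones(20), local_to_global)
      </preformat>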
      <p>Once we define the four subgroups, we analyze the relationships between the different CUIs
they contain. Figure 7 shows the chord diagram of the angiogram concept. This figure illustrates the
relationships between the CUIs within this subgroup. The nodes represent the different concepts, and the
width of each edge is proportional to the strength of the relationship between the two nodes it connects.
Table 2 provides a more detailed overview of the different concepts within this subgroup and the support set of each of them.
The most frequent concepts are anterior descending branch of left coronary artery, stent device, right
coronary artery structure, and stenosis. As can be observed in Figure 7, the anterior descending branch of
left coronary artery has a strong relationship with stenosis, pulmonary artery structure, and structure of
circumflex branch of left coronary artery. Furthermore, it is noteworthy that the right coronary artery
structure is a frequent medical concept in this subgroup that exhibits a consistent relationship with the
majority of other concepts, with the exception of pseudoaneurysm.</p>
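      <p>The edge widths of such a chord diagram can be derived from a simple CUI co-occurrence count over the subgroup, as in the following sketch:</p>
      <preformat>
# A sketch of the co-occurrence counts underlying a chord diagram:
# the edge width between two CUIs is proportional to the number of
# images in which both appear.
from collections import Counter
from itertools import combinations

def cooccurrence(cui_lists):
    pair_counts = Counter()
    for row in cui_lists:
        for a, b in combinations(sorted(set(row)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts
      </preformat>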
      <p>Figure 8 shows the chord diagram of the medical concept bone structure of cranium, and Table 3 shows
the specific canonical names of this subgroup and their support sets. As can be observed, mandible is the
most common medical concept. It has a strong relationship with permanent premolar tooth and maxilla,
and the concepts tooth structure, tooth root structure, and structure of wisdom tooth are also related to it.
In contrast, X-Ray Computed Tomography is only slightly related to maxilla and the CUI C1266909
(this CUI does not have a canonical name associated with it).</p>
      <p>Figure 9 and Table 4 show the chord diagram and the CUIs, canonical names, and support sets of the lower
extremity subgroup. Femur is the most frequent concept, with a strong relationship with cerebral cortex,
axis vertebra, and head of femur. We would like to point out that we are not sure whether cerebral cortex
is the correct canonical name of C0007776. Furthermore, it can be observed that the medical
concepts bone plates and screw are closely related.</p>
      <p>Ultrasonography is our last subgroup. Figure 10 shows its relationship chord diagram, and Table
5 presents the canonical names and support sets of this subgroup. Left ventricular structure and right
ventricular structure are the most common concepts and are strongly related to each other.
Right atrial structure is another common concept, and it can be observed that it is associated with the
concepts left ventricular structure, right ventricular structure, and left atrial structure.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>All the neural models were trained on an NVIDIA GeForce RTX 3080 Ti 12GB GPU using the PyTorch
framework and the Adam optimizer, with an initial learning rate of 1e-3 and a batch size of 64.</p>
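      <p>A sketch of the corresponding training loop follows; model and criterion are as in the earlier ENB0 sketch, and train_dataset is a placeholder for the ROCOv2 training split.</p>
      <preformat>
# A sketch of the training configuration reported above: Adam with an
# initial learning rate of 1e-3 and a batch size of 64. "model" and
# "criterion" follow the earlier ENB0 sketch; "train_dataset" is a
# placeholder for the ROCOv2 training split.
import torch
from torch.utils.data import DataLoader

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

model.train()
for images, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
      </preformat>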
      <p>Table 6 shows the results of our team, UACH-VisionLab, on the test partition of the dataset. These results
were provided by the ImageCLEFmedical Lab 2024 organizers. The F1-score is the harmonic
mean of precision and recall. A secondary F1-score was calculated using a manually curated subset of
concepts. Our team submitted two runs: the first used a drop path rate of 0.2, while
the second used a drop path rate of 0.3 together with a weight decay factor of 1e-5.</p>
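      <p>In formula form, the reported metric is</p>
      <disp-formula>
        <tex-math>\mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}</tex-math>
      </disp-formula>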
      <p>The results presented in Table 6 demonstrate that the first run achieves superior performance. The
increase in the drop path rate and the use of the L2 regularization method affect the performance of the
model, reducing its generalization ability on test data.</p>
      <p>To better understand how the incorporation of the four additional ENB0
models enhances the performance of our approach, Table 7 presents the precision, recall,
and F1-score metrics on randomly selected CUIs. The first three columns show the results obtained
when only one ENB0 model is employed, defined as the “Base" model. Subsequently, the approach
was enhanced by incorporating the training of the lower extremity (LE) subgroup, defining
the “Base+LE" approach. The “Base+LE+Angio" approach was created by additionally including the
angiogram subgroup. The “Base+LE+Angio+Ultrasono" approach was constructed by combining the LE
and angiogram subgroups with ultrasonography. Finally, the “Base+LE+Angio+Ultrasono+Cranium"
approach integrates the bone structure of cranium subgroup.</p>
      <p>A green highlight in Table 7 indicates a metric improvement, whereas a yellow highlight indicates a
metric decrease. It is important to note that the improvements in the F1-score are mainly related to an
increase in the recall score. The recall metric measures how many of the true positive samples are identified,
whereas the precision metric considers how many positive predictions are true positive samples.
Consequently, if the model detects only one true positive sample for a specific CUI, the precision
metric will be high while the recall metric will be low (as observed, for example,
in the third row of Table 7, where many false negative samples occur). With fewer
false negative detections but more false positives, the precision metric decreases (highlighted in
yellow), while the recall metric increases, resulting in an improved F1-score (highlighted in
green).</p>
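      <p>Concretely, with TP, FP, and FN denoting true positives, false positives, and false negatives,</p>
      <disp-formula>
        <tex-math>\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN},</tex-math>
      </disp-formula>
      <p>so trading false negatives for false positives raises recall at the expense of precision, which is the pattern highlighted in Table 7.</p>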
      <p>The improvements in the F1-score metric resulting from the incorporation of the lower extremity
subgroup (Base+LE approach) concern structure of left hip, femur, joint capsule, screw, and head of femur. All
of these medical concepts are considered in the training of this subgroup.</p>
      <p>The improvements in concept detection resulting from the incorporation of the angiogram
subgroup (Base+LE+Angio approach) include the stent device, caudal, structure of circumflex branch of
left coronary artery, collateral branch of vessel, pseudoaneurysm, and vessel positions. It should be noted
that all the improvements reported for the previous approach (Base+LE)
are maintained in this one, but only the new ones are highlighted in these three columns. The
same reporting strategy is used for the remaining approaches.</p>
      <p>The training and incorporation of the ultrasonography subgroup results in the
Base+LE+Angio+Ultrasono approach. The concepts that demonstrate an improvement in the
F1-score metric are liver, heart atrium, right atrial structure, aorta, mitral valve, right ventricular structure,
uterus, heart ventricle, thrombus, and pericardial effusion. The medical concept heart atrium was also
slightly modified by the training and incorporation of the bone structure of cranium subgroup.
However, this is the only concept that was modified: no additional improvements could be identified
with the Base+LE+Angio+Ultrasono+Cranium approach.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This working note paper presents the approach and results of the UACH-VisionLab team on the
ImageCLEFmedical 2024 Concept Detection subtask. An analysis of the results yielded by the baseline
code provided by the organizers reveals a significant imbalance issue in the context of multi-label
classification. Therefore, we define subgroups with the aim of reducing this
class imbalance problem. The medical concepts ultrasonography, bone structure of cranium, lower
extremity, and angiogram are identified as appropriate for constructing these subgroups.
Each subgroup is trained separately, and their results are merged with those produced by an initial
ENB0 neural model.</p>
      <p>Upon examination of the validation results obtained across the various iterations of our experiments, we
observe an increase in the recall metric. This indicates that our approach reduces the number of
false negative detections, which is the behavior we are looking for in class-imbalanced datasets. However,
it also results in an increase in the number of false positives, decreasing the precision metric.
The only subgroup that does not produce an improvement in the metric results is bone structure of
cranium. Further investigation is required to understand this behavior.</p>
      <p>The chord diagrams of the formed subgroups provide a more comprehensive understanding of the
diverse concepts within them and their interconnections. Unfortunately, due to time constraints, we
were unable to incorporate this knowledge into the training of the models. However, we consider
it to be of paramount importance, and we intend to incorporate this information into future approaches.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M. G.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prokopchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Karpenka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kiesel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEF 2024: Multimedia Retrieval in Medical Applications</article-title>
          , in:
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024)</source>
          , Springer Lecture Notes in Computer Science LNCS, Grenoble, France,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M. G.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          , Overview of ImageCLEFmedical 2024 -
          <article-title>Caption Prediction and Concept Detection</article-title>
          , in: CLEF2024 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Grenoble, France,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</article-title>
          ,
          <source>in: Proceedings of the 36th International Conference on Machine Learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6105</fpage>
          -
          <lpage>6114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Koitka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G. S.</given-names>
            <surname>de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Horn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Nensa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <article-title>ROCOv2: Radiology Objects in COntext version 2, an updated multimodal image dataset</article-title>
          ,
          <source>Scientific Data</source>
          (
          <year>2024</year>
          ). URL: https://arxiv.org/abs/2405.10004v1. doi:10.1038/s41597-024-03496-6.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep Residual Learning for Image Recognition</article-title>
          ,
          <source>in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . doi:10.1109/CVPR.2016.90.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Van Der</given-names>
            <surname>Maaten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          , Densely Connected Convolutional Networks,
          <source>in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2261</fpage>
          -
          <lpage>2269</lpage>
          . doi:10.1109/CVPR.2017.243.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dosovitskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Beyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kolesnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weissenborn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Unterthiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Minderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Heigold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Houlsby</surname>
          </string-name>
          ,
          <article-title>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</article-title>
          ,
          <source>in: International Conference on Learning Representations (ICLR)</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>