<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Essex at ImageCLEFcaption 2020 task</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alba G. Seco de Herrera</string-name>
          <email>alba.garcia@essex.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francisco Parrilla Andrade</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luke Bentley</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arely Aceves Compean</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science and Electronic Engineering, University of Essex</institution>
          ,
          <addr-line>Colchester</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The University of Essex participated in the fourth edition of the ImageCLEFcaption task, which aims to detect concepts in radiology images as an approach to medical image understanding. In this paper, the University of Essex team presents its participation in the ImageCLEF 2020 caption task based on a retrieval-based approach for concept detection. A Densely Connected Convolutional Network is used to encode the images. This paper compares several modifications of the baseline, considering aspects such as the image modality or the selection of concepts among the top retrieved images. The University of Essex was the third best team participating in the task, achieving a mean F1 score of 0.381, very close to the results obtained by the top two teams. Code and pre-trained models are available at https://github.com/fjpa121197/ImageCLEFmedEssex2020.</p>
      </abstract>
      <kwd-group>
        <kwd>ImageCLEF</kwd>
        <kwd>image understanding</kwd>
        <kwd>concept detection</kwd>
        <kwd>medical image retrieval</kwd>
        <kwd>Densely Connected Convolutional Network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        This paper describes the participation of the School of Computer Science and
Electronic Engineering (CSEE) at the University of Essex in the ImageCLEFcaption
2020 task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. ImageCLEF [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is an evaluation campaign organised as part of
the CLEF initiative labs. The ImageCLEFcaption task aims to interpret and
summarise the insights gained from medical images. The 2020 edition, similar to
2019, focused on concept detection in a large corpus of radiology images. This
task provides tools for radiology image understanding. A detailed description of
the data and the task is presented in Pelka et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        The ImageCLEFcaption 2020 task is the fourth edition of this successful task. In
previous editions [
        <xref ref-type="bibr" rid="ref3 ref4 ref8">8, 4, 3</xref>
        ] multiple approaches have been explored by the
participants, and retrieval-based approaches achieved the best results [
        <xref ref-type="bibr" rid="ref1 ref7">7, 13, 1</xref>
        ]. Following past years'
experience, in this paper we propose a retrieval-based approach where the
images are encoded by a Densely Connected Convolutional Network (DenseNet) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Several experiments are presented to select the most relevant concepts based on
the concepts associated with the top-ranked retrieved images. Code and pre-trained
models are publicly available at https://github.com/fjpa121197/ImageCLEFmedEssex2020.
      </p>
      <p>The rest of the paper is organised as follows. Section 2 presents the collection and
the evaluation methodology used in this work. Section 3 explains the techniques
proposed in this paper, including a detailed description of the runs submitted to
the ImageCLEFcaption task. The results are presented in Section 4. Finally, the
conclusions are given in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>Collection &amp; evaluation</title>
      <p>
        In this work, we used the ImageCLEFmed caption 2020 collection [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. It consists
of three subsets:
- a training set including 64,753 images;
- a validation set including 15,970 images;
- a test set including 3,534 images.
      </p>
      <p>
        The images originate from biomedical journal articles extracted from the
PubMed Central® (PMC, https://www.ncbi.nlm.nih.gov/pmc/) repository [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Each image is associated with
multiple Unified Medical Language System® (UMLS) Concept Unique Identifiers
(CUIs) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The UMLS CUIs associated with the images in the training and
validation sets were distributed and include 3,047 unique concepts.
      </p>
      <p>
        The UMLS CUIs for the test set were not distributed and were, therefore, not
used to build the model. The ImageCLEFcaption task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] organisers evaluated
the submitted runs by computing F1 scores (see Section 4).
      </p>
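      <p>A minimal sketch of this evaluation, assuming the predictions and the ground
truth are given as sets of CUIs per image; the organisers' official script may
differ in details such as the handling of empty prediction sets:</p>
      <preformat>
# Mean F1 over images between predicted and ground-truth CUI sets.
# predictions / ground_truth: dicts mapping image id -> set of CUIs.
def mean_f1(predictions, ground_truth):
    total = 0.0
    for img, gold in ground_truth.items():
        pred = predictions.get(img, set())
        tp = len(pred.intersection(gold))
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        total += 2 * precision * recall / (precision + recall) if tp else 0.0
    return total / len(ground_truth)
      </preformat>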
      <p>In 2020, the ImageCLEFmed caption collection is classified into seven medical
image modalities (Angiography, Computed Tomography, Magnetic Resonance,
Positron Emission Tomography, Ultrasound, X-ray and combined modalities in
one image).</p>
    </sec>
    <sec id="sec-3">
      <title>Methodology</title>
      <p>The proposed approach is based on a content-based image retrieval model, where
DenseNets are used for feature extraction (see Section 3.1). A similarity
comparison is done between the query image and the images in the training and
validation sets (see Section 3.2). Finally, concept selection is performed to
predict the medical concepts for the query image (see Section 3.3).</p>
      <p>Figure 2 shows an overview of the approach.</p>
      <p>[Figure: example image ROCO2 CLEF 26437 with its associated concepts:
C0040398 (Tomography, Emission-Computed), C0224338 (Structure of sternal muscle)
and C0040405 (X-Ray Computed Tomography).]</p>
      <sec id="sec-3-1">
        <title>Feature extraction</title>
        <p>
          Following the success of the AUEB NLP Group at ImageCLEFmed Caption
2019 [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], this approach also uses a pre-trained DenseNet model (DenseNet-121) to
encode the images, i.e., to extract their relevant features based on this model. The
existing DenseNet-121 has many parameters, which would require immense computing
power and very large-scale datasets to train from scratch. Hence, transfer
learning is used in this work to mitigate this problem, as its power in computer
vision has been extensively studied in the literature [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
      <p>
        DenseNet models are Convolutional Neural Network (CNN) models where
each layer is connected directly to every other layer [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. DenseNet models have been
recognised for their ability to reach a performance similar to ResNet models, which
use twice as many layers [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. DenseNet-121 has 121 layers with trainable
weights. The model uses weights pre-trained on the ImageNet dataset, which consists
of 1.2 million images across 1,000 classes.
      </p>
      <p>The input image is resized to 64 × 64 pixels and transformed into an array; then a
preprocessing module from the Keras DenseNet implementation is used. This module is in charge
of transforming the pixel values into a 0-1 range and of normalising the
values based on the ImageNet dataset. The DenseNet-121 model is then used to
encode each image, representing it as a 4,096-dimensional vector extracted before the
classification layer.</p>
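      <p>A minimal sketch of this encoding step with tf.keras is shown below. The
exact layer from which the 4,096-dimensional vector is taken is not detailed here,
so the pooled output of the convolutional base is used as an assumption:</p>
      <preformat>
import numpy as np
from tensorflow.keras.applications.densenet import DenseNet121, preprocess_input
from tensorflow.keras.preprocessing import image

# DenseNet-121 pre-trained on ImageNet, without the classification layer.
encoder = DenseNet121(weights="imagenet", include_top=False, pooling="avg",
                      input_shape=(64, 64, 3))

def encode(path):
    # Resize to 64x64, apply the Keras DenseNet preprocessing
    # (0-1 scaling plus ImageNet normalisation) and encode the image.
    img = image.load_img(path, target_size=(64, 64))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return encoder.predict(x, verbose=0)[0]
      </preformat>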
      <p>Fine-tuning. In this work, a fine-tuning strategy is also explored to transfer
learned recognition capabilities to the specific challenge of concept detection.
The fine-tuning has been done specifically for each image modality: a
fully connected layer has been added to the DenseNet-121 model, transforming it
into a multi-label classification model. The last fully connected layer was trained
for 10 rounds.</p>
      <p>In particular, the following parameters were used:
- Optimizer: RMSProp
- Learning rate: 0.0001
- Batch size: 32
- Momentum: 0.0</p>
      <p>The model was trained in two phases (a sketch follows the list):
- 1st phase: only the classification layer is trained.
- 2nd phase: a portion of the feature-learning layers and the classification layer are trained.</p>
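      <p>A minimal sketch of this two-phase fine-tuning, assuming a sigmoid
multi-label head over the 3,047-concept vocabulary; how many feature-learning
layers were unfrozen in the second phase is an assumption here:</p>
      <preformat>
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications.densenet import DenseNet121

N_CONCEPTS = 3047  # size of the distributed concept vocabulary

base = DenseNet121(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(64, 64, 3))
base.trainable = False  # 1st phase: only the classification layer is trained

# Multi-label head: one sigmoid unit per concept.
outputs = layers.Dense(N_CONCEPTS, activation="sigmoid")(base.output)
model = models.Model(base.input, outputs)
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-4, momentum=0.0),
              loss="binary_crossentropy")
# model.fit(...)  # 1st phase

# 2nd phase: unfreeze a portion of the feature-learning layers.
for layer in base.layers[-30:]:  # the unfrozen fraction is an assumption
    layer.trainable = True
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-4, momentum=0.0),
              loss="binary_crossentropy")
# model.fit(...)  # 2nd phase
      </preformat>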
        <p>Each phase consisted of 10 epochs (each epoch consisted of 100 steps, of which
10 steps were used for validation).</p>
      </sec>
      <sec id="sec-3-2">
        <title>Image retrieval</title>
        <p>In this work, the image modality is used to improve the system performance.
Each image in the test set is compared to all the images in the training or
validation sets belonging to the same image modality as the query image. In the
case of run 64104, the images were retrieved from the whole training set without
considering the modality.</p>
        <p>In order to perform the comparison, the Canberra and Manhattan distances are
computed on the encoded features (see Section 3.1). These metrics were chosen
based on their accuracy and computational speed. The
10 most similar images to the given query are selected and their associated
concepts extracted. Each extracted concept is tagged with a score based
on its rank position or on the computed distance value (see Section 3.3 for
more details).</p>
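        <p>A minimal sketch of this retrieval step, assuming the database encodings
are stacked into a matrix (in SciPy, the 'cityblock' metric is the Manhattan
distance):</p>
        <preformat>
import numpy as np
from scipy.spatial.distance import cdist

def retrieve_top10(query_vec, db_vecs, db_ids, metric="canberra"):
    # Rank database images by distance to the query encoding and return
    # the ids and distances of the 10 closest ("cityblock" = Manhattan).
    dists = cdist(query_vec[None, :], db_vecs, metric=metric)[0]
    order = np.argsort(dists)[:10]
    return [db_ids[i] for i in order], dists[order]
        </preformat>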
      </sec>
      <sec id="sec-3-1">
        <title>Concepts selection</title>
        <p>In order to assign the concepts to the query images in the test set, two
methodologies were tested (a sketch of both is given after their description).</p>
        <p>Ranking-based selection. Each concept is assigned a score based on
the ranking of the 10 retrieved images it is associated with. If the
concept is associated with more than one image, the values are added up.
For example, the highest-ranked image has all its concepts given a value of 10,
and the second highest has all its concepts given a value of 9. If the final score
(after the addition) given to a concept is equal to or above the threshold of 20,
the concept is considered relevant to the query image and assigned to it.</p>
        <p>Distance-based selection. Each concept is assigned a score based on the
distance values computed for the 10 retrieved images it is associated
with. As in the ranking-based selection, if the concept is associated with more
than one image, the values are added up. For the final concept scores
(after the addition), the mean or a percentile (99 or 95) is set as the threshold
to select the concepts. If the score is equal to or above the threshold, the
concept is considered relevant to the query image and assigned to it. During
the experimental set-up, other thresholds were tested, such as percentiles 75 and
98 or a normalisation process; however, these were not finally submitted to the
challenge since the mean and percentiles 95 and 99 achieved better F1 scores on the
validation set.</p>
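        <p>A minimal sketch of both selection strategies; the exact mapping from a
distance value to a concept score is not fully specified, so an inverse-distance
weighting is assumed here:</p>
        <preformat>
from collections import Counter
import numpy as np

def ranking_based_selection(ranked_concepts, threshold=20):
    # ranked_concepts: concept lists of the 10 retrieved images, best first.
    # The best image contributes 10 to each of its concepts, the next 9, ...
    scores = Counter()
    for rank, concepts in enumerate(ranked_concepts):
        for cui in concepts:
            scores[cui] += 10 - rank
    return [cui for cui, s in scores.items() if s >= threshold]

def distance_based_selection(ranked_concepts, distances, percentile=95):
    # Score concepts from the distances of the images they appear in and
    # keep those at or above the chosen percentile (or mean) of the scores.
    scores = Counter()
    for concepts, dist in zip(ranked_concepts, distances):
        for cui in concepts:
            scores[cui] += 1.0 / (1.0 + dist)  # assumed distance-to-score map
    cut = np.percentile(list(scores.values()), percentile)
    return [cui for cui, s in scores.items() if s >= cut]
        </preformat>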
      </sec>
      <sec id="sec-3-2">
        <title>Runs</title>
        <p>This section provides a detailed description of the runs submitted to the
ImageCLEFcaption 2020 task. The methods used to implement these runs are described in
Section 3. Table 1 summarises the techniques used by each run.
- Run 64104 - baseline: In this run, DenseNet-121 is used to encode the
images. The top 10 images are retrieved from the training set using the Canberra
distance. Ranking-based selection is used to select the relevant concepts from
the retrieved images.
- Run 67416: This run is similar to the baseline. In this case, the image
modality information is used in the retrieval step. The top 10 images from the same
modality as the query image are retrieved from the training set.
- Run 63804: This run is similar to Run 67416. In this case, the images are
retrieved from both the training and validation sets. The modality information
is also considered.
- Run 64394: This run is similar to Run 67416. In this run, fine-tuning is
applied.
- Run 68019: This run is similar to Run 64394. For this run, distance-based
selection is used to select the relevant concepts from the retrieved
images, using the mean of the scores as the threshold.
- Run 68026: This run is similar to Run 64394. For this run, distance-based
selection is used to select the relevant concepts from the retrieved
images, setting the 95th percentile as the threshold.
- Run 68025: This run is similar to Run 64394. For this run, distance-based
selection is used to select the relevant concepts from the retrieved
images, setting the 98th percentile as the threshold.
- Run 68022: This run is similar to Run 64394. For this run, distance-based
selection is used to select the relevant concepts from the retrieved
images, setting the 99th percentile as the threshold.
- Run 68027: This run is similar to Run 68022 but uses the Manhattan
distance in the retrieval step.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>This paper describes the participation of CSEE at the University of Essex in the
ImageCLEFcaption 2020 task. CSEE proposes a retrieval-based approach using
a DenseNet-121 model to encode the images in the collection. CSEE compares
different modifications of the baseline to study their effects on the final
performance. The best submitted run used fine-tuning per image modality and the Canberra
distance in the retrieval step. Concepts were selected based on the top 10 ranked
images. CSEE was the third best team in the benchmark, achieving an F1 score of
0.381, very close to the results obtained by the top two teams. In 2020, the image
modality was provided, and future improvements could tackle an initial modality
classification step as well as training the retrieval step per modality.
Further work is also needed to better understand the effects of the concept selection
step.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Abacha, A.B., García Seco de Herrera, A., Gayen, S., Demner-Fushman, D., Antani, S.: NLM at ImageCLEF 2017 caption task. In: CLEF2017 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org, Dublin, Ireland (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Bodenreider, O.: The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research 32(Database-Issue), 267–270 (2004). https://doi.org/10.1093/nar/gkh061</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Eickhoff, C., Schwall, I., García Seco de Herrera, A., Müller, H.: Overview of ImageCLEFcaption 2017 - the image caption prediction and concept extraction tasks to understand biomedical images. In: CLEF2017 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org, Dublin, Ireland (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. García Seco de Herrera, A., Eickhoff, C., Andrearczyk, V., Müller, H.: Overview of the ImageCLEF 2018 caption prediction tasks. In: CLEF2018 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org, Avignon, France (September 10-14 2018)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4700–4708 (2017)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Ionescu, B., Müller, H., Péteri, R., Abacha, A.B., Datla, V., Hasan, S.A., Demner-Fushman, D., Kozlovski, S., Liauchuk, V., Cid, Y.D., Kovalev, V., Pelka, O., Friedrich, C.M., García Seco de Herrera, A., Ninh, V.T., Le, T.K., Zhou, L., Piras, L., Riegler, M., Halvorsen, P., Tran, M.T., Lux, M., Gurrin, C., Dang-Nguyen, D.T., Chamberlain, J., Clark, A., Campello, A., Fichou, D., Berari, R., Brie, P., Dogariu, M., Stefan, L.D., Constantin, M.G.: Overview of the ImageCLEF 2020: Multimedia retrieval in lifelogging, medical, nature, and internet applications. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 11th International Conference of the CLEF Association (CLEF 2020), vol. 12260. LNCS Lecture Notes in Computer Science, Springer, Thessaloniki, Greece (September 22-25 2020)</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Kougia, V., Pavlopoulos, J., Androutsopoulos, I.: AUEB NLP Group at ImageCLEFmed Caption 2019. In: CLEF2019 Working Notes. CEUR Workshop Proceedings, vol. 2380. CEUR-WS.org, Lugano, Switzerland (September 09-12 2019)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Pelka, O., Friedrich, C.M., García Seco de Herrera, A., Müller, H.: Overview of the ImageCLEFmed 2019 concept prediction task. In: CLEF2019 Working Notes. CEUR Workshop Proceedings, vol. 2380. CEUR-WS.org, Lugano, Switzerland (September 09-12 2019)</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Pelka, O., Friedrich, C.M., García Seco de Herrera, A., Müller, H.: Medical image understanding: Overview of the ImageCLEFmed 2020 concept prediction task. In: CLEF2020 Working Notes. CEUR Workshop Proceedings, vol. 12260. CEUR-WS.org, Thessaloniki, Greece (September 22-25 2020)</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Roberts, R.J.: PubMed Central: The GenBank of the published literature. Proceedings of the National Academy of Sciences of the United States of America 98(2), 381–382 (January 2001). https://doi.org/10.1073/pnas.98.2.381</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Tan, T., Li, Z., Liu, H., Zanjani, F.G., Ouyang, Q., Tang, Y., Hu, Z., Li, Q.: Optimize transfer learning for lung diseases in bronchoscopy using a new concept: sequential fine-tuning. IEEE Journal of Translational Engineering in Health and Medicine 6, 1–8 (2018)</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems. pp. 3320–3328 (2014)</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. Zhang, Y., Wang, X., Guo, Z., Li, J.: ImageSem at ImageCLEF 2018 caption task: Image retrieval and transfer learning. In: CLEF2018 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org, Avignon, France (September 10-14 2018)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>