<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Conference and Labs of the Evaluation Forum, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>KDE-med-lab at ImageCLEF 2024: Identify data and detect generative models using CNN by lung segmentation based on U-net.</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tetsuya Asakawa</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kazuki Shimizu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kei Nomura</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Masaki Aono</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Toyohashi Heart Center</institution>
          ,
          <addr-line>21-1 Gobutori, Oyama-cho, Toyohashi, Aichi, Japan, 441-8530</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Toyohashi University of Technology</institution>
          ,
          <addr-line>1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi, Japan, 441-8580</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>12</lpage>
      <abstract>
        <p>The CLEF 2024 ImageCLEFmed GANs Task is an example of a challenging research problem in the field of CT image analysis. The purpose of this research is to analyse synthetic biomedical image data: in the first subtask, to determine which real images were used in training to produce the generated images, and in the second subtask, to detect which generative model produced a given image. We propose segmenting lung images based on U-net, and we employ fine-tuned deep neural network models. In addition, in the first subtask we use two-stage transfer learning on the ImageCLEFmed GANs training datasets for Task 1 and Task 2. Our submissions (KDE-lab team) on the task test dataset reached an accuracy of about 50.6% in the first subtask and an ARI (Adjusted Rand Index) of about 0.226 in the second subtask.</p>
      </abstract>
      <kwd-group>
        <kwd>Tuberculosis</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Lung segmentation</kwd>
        <kwd>U-net</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>With the spread of various diseases (e.g., tuberculosis (TB), COVID-19, and influenza), medical research
has been performed to develop and implement the necessary treatments for viruses. However, there is
no method currently available to identify such diseases early. An early diagnosis method is needed to
provide the necessary treatment, develop specific medicines, and prevent the deaths of patients.</p>
      <p>
        Accordingly, a significant amount of effort has been invested in medical image analysis research
in recent years. In fact, a task dedicated to TB has been adopted as part of the ImageCLEF evaluation
campaign for the last seven years [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In ImageCLEF 2024, the main task [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
“ImageCLEFmed GANs,” addresses computed tomography (CT) images. The goal of the task is to analyse
synthetic biomedical image data: to determine which real images were used in training to produce
the generated images in the first subtask, and to detect the generative model in the second subtask.
      </p>
      <p>In this paper, we propose segmenting lung images based on U-net, and we employ fine-tuned
deep neural network models, namely convolutional neural network (CNN) models and the Vision Transformer
(ViT). In addition, in the first subtask we use two-stage transfer learning on the ImageCLEF GANs training datasets
for Task 1 and Task 2.</p>
      <p>The new contribution of this paper is a novel feature-building technique based on lung
segmentation with U-net. In Section 2, we describe the conducted task and the ImageCLEFmed
GANS 2024 dataset. In Section 3, we introduce the experimental settings and features used
in this study. In Section 4, we describe the experiments we performed. In Section 5, we provide our
conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. ImageCLEFmed GANS 2024 Dataset</title>
      <p>The GANs task of the ImageCLEF 2024 Challenge included partial 2D gray-scale chest CT images [9].
There are two subtasks, which we describe below.
2.1. First subtask: Identify training data ”fingerprints”
The development dataset comprises data for two different generative models, organized as follows:
Model 1 (representing the ground truth for the test dataset of the previous edition) consists of 10k
generated images and 200 images annotated as used/not used for training to generate the images.
Specifically, 100 images were utilized for training, while the remaining 100 were not.</p>
      <p>Model 2 consists of 10k generated images and 6k annotated images marked as used/not used for
training to generate the images. Specifically, 3k images were utilized for training, while the remaining 3k
were not.</p>
      <p>The test dataset has been structured so that the two subsets of real images are mixed, with no disclosed
proportion between unused and used ones. In this edition, two generative models are evaluated to
study the similarity between the real and synthetic data, so two sets of generated and real images are
provided, as shown in Fig 1(a).
2.2. Second subtask: Detect generative models ”fingerprints”
The training dataset consists of 600 images generated using three different generative models. Each
model is represented by 200 images of size 256x256, organized in annotated folders.</p>
      <p>The test task involves a dataset comprising 3000 computed tomography (CT) slices,
each a grayscale image of 256x256 pixels. These slices were generated using four distinct generative
models, as shown in Fig 1(b).</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Method</title>
      <p>We propose segmenting lung images based on U-net, and we employ fine-tuned deep neural
network models. In addition, in the first subtask we use two-stage transfer learning on the ImageCLEFmed GANS
training datasets for Task 1 and Task 2. We detail our proposed system in the following sections.
3.1. Lung images with a mask based on U-net
We noticed that all slices contain information beyond the lungs, including bone, space, fat, and skin,
that could influence the classification of the samples. We decompressed the files and extracted a
lungs-only mask based on U-net [10], [11], as shown in Fig 2. We then applied this mask to obtain
lung-only images.</p>
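      <p>Applying a U-net lung mask to a slice amounts to thresholding the predicted mask and zeroing everything outside it. The sketch below is a minimal NumPy stand-in for illustration; the function name and the toy data are our own and not part of the actual pipeline.</p>
      <preformat>
```python
import numpy as np

def apply_lung_mask(ct_slice, lung_mask):
    """Keep only the lung region of a grayscale CT slice.

    ct_slice:  2D array, e.g. a 256x256 grayscale image.
    lung_mask: 2D array of U-net output probabilities in [0, 1].
    """
    binary = np.greater_equal(lung_mask, 0.5)   # threshold the soft mask
    return np.where(binary, ct_slice, 0)        # suppress bone, fat, skin, space

# toy example: a 4x4 "slice" with a 2x2 "lung" in the centre
slice_ = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
masked = apply_lung_mask(slice_, mask)          # only the centre 2x2 survives
```
      </preformat>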
      <sec id="sec-3-1">
        <title>3.2. Proposed Method for the first subtask</title>
        <p>3.2.1. Two-stage transfer learning in the first subtask (Baseline)
We propose a fine-tuned deep neural network model that uses two-stage transfer learning. The first
stage uses ImageCLEF GANs training dataset 1, and the second stage uses training dataset 2,
as shown in Fig 3. We do not use the masked lung images in the Baseline, and we use a single
deep neural network model, Densenet 121, as the Baseline. The CNN features are then passed to
”K-means” clustering to predict the two classes.</p>
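        <p>The two-stage scheme can be illustrated with a deliberately tiny stand-in model: a frozen random ”backbone” producing features and a logistic-regression head trained by plain gradient descent, first on a stand-in for training dataset 1 and then continued on a stand-in for training dataset 2. All names and data below are illustrative assumptions; the actual system fine-tunes Densenet 121 and the other networks.</p>
        <preformat>
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(feats, labels, w, lr=0.5, epochs=300):
    """Gradient descent on a logistic-regression 'head'."""
    for _ in range(epochs):
        p = sigmoid(feats @ w)
        w = w - lr * feats.T @ (p - labels) / len(labels)
    return w

rng = np.random.default_rng(0)
backbone = rng.normal(size=(8, 4))      # frozen "feature extractor"

def features(x):
    return np.tanh(x @ backbone)        # the backbone stays fixed in both stages

# stage 1: train the head on (hypothetical) training dataset 1
x1 = rng.normal(size=(200, 8))
f1 = features(x1)
y1 = np.greater(f1[:, 0], 0).astype(float)
w = train_head(f1, y1, np.zeros(4))

# stage 2: continue fine-tuning the same head on training dataset 2
x2 = rng.normal(size=(200, 8))
f2 = features(x2)
y2 = np.greater(f2[:, 0], 0).astype(float)
w = train_head(f2, y2, w)

acc = np.mean(np.greater(sigmoid(f2 @ w), 0.5) == y2)
```
        </preformat>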
        <sec id="sec-3-1-2">
          <title>3.2.2. Two-stage transfer learning in the first subtask (proposed model)</title>
          <p>We propose a fine-tuned deep neural network model that uses two-stage transfer learning. The first
stage uses ImageCLEF GANs training dataset 1, and the second stage uses training dataset 2.
We use the masked lung images obtained with U-net, as shown in Fig 4.
We used five deep neural network models: Swin Transformer, Densenet 121, Inception-Resnet V2,
EfficientNetB03, and Inception V3. The CNN features are then passed to ”K-means” clustering to
predict the two classes (used/not used).</p>
          <p>3.3. Baseline and Proposed Method for the second subtask
We propose a fine-tuned deep neural network model. The second subtask is an unsupervised learning
problem; therefore, we employed K-means clustering, the most popular unsupervised learning algorithm
[12]. K-means clustering is used to find intrinsic groups within the unlabelled dataset and draw
inferences from them.</p>
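          <p>The clustering step can be sketched as follows. This is a minimal NumPy implementation of Lloyd's algorithm for illustration only; a library implementation such as scikit-learn's KMeans would normally be used on the CNN features, and the toy data below is our own.</p>
          <preformat>
```python
import numpy as np

def kmeans(features, k=2, iters=50):
    """Minimal Lloyd's algorithm for clustering feature vectors."""
    # farthest-point initialisation: start from the first vector, then
    # repeatedly add the vector farthest from the chosen centroids
    centroids = [features[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(features[:, None, :] - np.array(centroids)[None, :, :], axis=2)
        centroids.append(features[d.min(axis=1).argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each vector to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# two well-separated blobs standing in for two groups of CNN features
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(5, 0.1, (20, 8))])
labels = kmeans(feats, k=2)
```
          </preformat>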
          <p>Fig 5 shows the Baseline and the Proposed model: Fig 5 (a) is the Baseline, and Fig 5 (b) is the Proposed model.</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Classifier (K-means=4)</title>
          <p>We note that the masked lung images obtained with U-net are used only in the Proposed model of Fig 5 (b).
We used five deep neural network models: Swin Transformer, Densenet 121, Inception-Resnet V2,
EfficientNetB03, and Inception V3, and only Densenet 121 as the Baseline. The CNN features
are then passed to ”K-means clustering” to predict the four generative models.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Experimental parameters</title>
        <p>Here, we used the following hyper-parameters: the batch size is 256, the optimization
function is stochastic gradient descent with a learning rate of 0.001 and a momentum of 0.9, and the
number of epochs is 50 with early stopping. For the implementation, we employed TensorFlow [13] as
our deep learning framework. The experiments were performed using PyTorch on Ubuntu 20.04. The workstation has
an Intel Xeon 6242R (20 cores/3.10 GHz/TDP: 205 W) CPU with 16 GB of RAM and an NVIDIA RTX
A6000 GPU.</p>
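        <p>The early-stopping rule can be sketched as a generic patience-based criterion. The patience value of 5 below is an illustrative assumption, not a value reported for our experiments.</p>
        <preformat>
```python
import math

def stopping_epoch(val_losses, patience=5, max_epochs=50):
    """Epoch at which patience-based early stopping halts training.

    Training stops once `patience` consecutive epochs fail to improve
    the best validation loss seen so far, or at `max_epochs`.
    """
    best = math.inf
    since_best = 0
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        new_best = min(best, loss)
        if new_best != best:        # the loss improved this epoch
            best = new_best
            since_best = 0
        else:                       # no improvement this epoch
            since_best += 1
        if since_best == patience:
            return epoch
    return min(len(val_losses), max_epochs)

# improvement stops after epoch 3, so training halts 5 epochs later
losses = [1.0, 0.9, 0.8, 0.85, 0.85, 0.85, 0.85, 0.85]
stop = stopping_epoch(losses)   # 8
```
        </preformat>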
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Submission in the first subtask</title>
        <p>Table 1 shows the results of our submissions. We employed Densenet 121 as the Baseline, and we
employed Swin Transformer, Densenet 121, Inception-Resnet V2, EfficientNetB03, and Inception V3
applying U-net mask images.</p>
        <p>In terms of the total score, Swin Transformer+U-net achieves the best result.</p>
        <p>We explain the submission scores in more detail. For Accuracy and Precision on Test Dataset 1, and
for Recall on Test Dataset 2, Densenet 121 (Baseline) achieves the best score. For Accuracy, Precision, and
Recall on Test Dataset 2, Inception-Resnet V2+U-net achieves the best score. Finally, for Precision and Recall on
Test Dataset 1, Swin Transformer+U-net achieves the best score. This is due to the effect of the shifted window-based self-attention in
the Swin Transformer, which calculates features of different-sized regions, from the features of the
entire image to the features of the details.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Submission in the second subtask</title>
        <p>Table 2 shows the results of our submissions. The evaluation score is the ARI (Adjusted Rand Index);
standard clustering methods use the Rand Index. We employed Densenet 121 as the Baseline, and we
employed ResNet18, Densenet 121, Inception-Resnet V2, EfficientNetB03, and Inception V3 applying
U-net mask images. In terms of the score, Densenet 121+U-net achieves the best result.</p>
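        <p>For reference, the ARI can be computed from the contingency table of two labelings. The NumPy sketch below is for illustration only; the official evaluation presumably uses a standard implementation such as scikit-learn's adjusted_rand_score, and the toy labelings are our own.</p>
        <preformat>
```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI between two labelings, from their contingency table."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    # contingency counts: class c in the first labeling, cluster k in the second
    table = np.array([[np.sum((labels_true == c) * (labels_pred == k))
                       for k in np.unique(labels_pred)]
                      for c in np.unique(labels_true)])
    sum_ij = sum(comb(int(n), 2) for n in table.ravel())
    sum_a = sum(comb(int(n), 2) for n in table.sum(axis=1))
    sum_b = sum(comb(int(n), 2) for n in table.sum(axis=0))
    expected = sum_a * sum_b / comb(int(table.sum()), 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# a perfect clustering scores 1.0 even if the cluster ids are permuted
ari = adjusted_rand_index([0, 0, 1, 1, 2, 2], [1, 1, 2, 2, 0, 0])
```
        </preformat>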
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>In this study, we proposed segmenting lung images based on U-net for real and generated chest
CT images.</p>
      <p>In addition, we performed fine-tuning of deep neural network models. In the first subtask,
we used two-stage transfer learning on the ImageCLEFmed GANS training datasets for Task 1 and Task 2.</p>
      <p>The experimental results demonstrate that our proposed models outperform some models in terms
of accuracy in the first subtask and ARI in the second subtask. Therefore, we believe that using
U-net to pre-process an image is effective.</p>
      <p>In the future, the optimal weights for the neural networks might be obtained for arbitrary X-ray,
CT, echo, or magnetic resonance images. Moreover, we hope our proposed model will
encourage further research into the early detection of diseases (such as TB, COVID-19, and influenza)
or unknown diseases.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Part of this research was carried out with the support of the Grant for Toyohashi Heart Center Smart
Hospital Joint Research Course and the Grant-in-Aid for Scientific Research (C) (issue numbers 22K12149
and 22K12040).</p>
      <p>Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024), Springer
Lecture Notes in Computer Science LNCS, Grenoble, France, 2024.</p>
      <p>[9] A. Andrei, A. Radzhabov, D. Karpenka, Y. Prokopchuk, V. Kovalev, B. Ionescu, H. Müller, Overview
of 2024 ImageCLEFmedical GANs Task – Investigating Generative Models’ Impact on Biomedical
Synthetic Images, in: CLEF2024 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org,
Grenoble, France, 2024.</p>
      <p>[10] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image
segmentation (2015). URL: https://arxiv.org/pdf/1505.04597.pdf. arXiv:1505.04597.</p>
      <p>[11] C. Liu, M. Pang, Lung CT image segmentation via dilated U-net model and multi-scale
gray correlation-based approach, Circuits, Systems, and Signal Processing 43 (2024) 1697–1714.
URL: https://link.springer.com/content/pdf/10.1007/s00034-023-02532-x.pdf. doi:10.1007/s00034-023-02532-x.</p>
      <p>[12] E. Ahn, A. Kumar, D. Feng, M. Fulham, J. Kim, Unsupervised feature learning with k-means and
an ensemble of deep convolutional neural networks for medical image classification (2019). URL:
https://arxiv.org/pdf/1906.03359.pdf. arXiv:1906.03359.</p>
      <p>[13] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, R. Jozefowicz, Y. Jia, L. Kaiser,
M. Kudlur, J. Levenberg, D. Mané, M. Schuster, R. Monga, S. Moore, D. Murray, C. Olah, J. Shlens,
B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals,
P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine
learning on heterogeneous systems, 2015. URL: https://github.com/tensorflow.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dicente Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalinovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          , H. Müller,
          <article-title>Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances</article-title>
          ,
          <source>in: CLEF2017 Working Notes, CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;
          , Dublin, Ireland,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dicente Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          , H. Müller, Overview of ImageCLEFtuberculosis 2018 -
          <article-title>detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score</article-title>
          ,
          <source>in: CLEF2018 Working Notes, CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;
          , Avignon, France,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dicente Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Klimuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tarasau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEFtuberculosis 2019 - automatic ct-based report generation and tuberculosis severity assessment</article-title>
          ,
          <source>in: CLEF2019 Working Notes, CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;
          , Lugano, Switzerland,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kozlovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dicente Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tarasau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEFtuberculosis 2020 - automatic CT-based report generation</article-title>
          , in: CLEF2020 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org &lt;http://ceur-ws.org&gt;
          , Thessaloniki, Greece,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kozlovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dicente Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Overview of imagecleftuberculosis 2021 - ct-based tuberculosis type classification</article-title>
          ,
          <source>in: CLEF2021 Working Notes, CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;
          , Bucharest, Romania,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Péteri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. G. S. de Herrera</surname>
            ,
            <given-names>C. M.</given-names>
          </string-name>
          <string-name>
            <surname>Friedrich</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Bloch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Brüngel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Idrissi-Yaghir</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Schäfer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Kozlovski</surname>
            ,
            <given-names>Y. D.</given-names>
          </string-name>
          <string-name>
            <surname>Cid</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Kovalev</surname>
          </string-name>
          , L.
          <string-name>
            <surname>-D. Ştefan</surname>
            ,
            <given-names>M. G.</given-names>
          </string-name>
          <string-name>
            <surname>Constantin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Dogariu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Popescu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Deshayes-Chossart</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Schindler</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Chamberlain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Campello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Clark</surname>
          </string-name>
          ,
          <article-title>Overview of the ImageCLEF 2022: Multimedia Retrieval in Medical, Social Media and Nature Applications, in: Experimental IR Meets Multilinguality</article-title>
          , Multimodality, and
          <string-name>
            <surname>Interaction</surname>
          </string-name>
          ,
          <source>Proceedings of the 13th International Conference of the CLEF Association (CLEF</source>
          <year>2022</year>
          ),
          <source>LNCS Lecture Notes in Computer Science</source>
          , Springer, Bologna, Italy,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Snider</surname>
          </string-name>
          , G. Adams,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J. A. A. A. R. I. C. V. K. A. S. G. I. Nikolaos</given-names>
            <surname>Papachrysos</surname>
          </string-name>
          , Johanna Schöler,
          <string-name>
            <given-names>H.</given-names>
            <surname>Manguinhas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ştefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deshayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Popescu</surname>
          </string-name>
          , Overview of ImageCLEF 2023:
          <article-title>Multimedia retrieval in medical, socialmedia and recommender systems applications</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and
          <string-name>
            <surname>Interaction</surname>
          </string-name>
          ,
          <source>Proceedings of the 14th International Conference of the CLEF Association (CLEF</source>
          <year>2023</year>
          ), Springer Lecture Notes in Computer Science LNCS, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Snider</surname>
          </string-name>
          , G. Adams,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J. A. A. A. R. I. C. V. K. A. S. G. I. Nikolaos</given-names>
            <surname>Papachrysos</surname>
          </string-name>
          , Johanna Schöler,
          <string-name>
            <given-names>H.</given-names>
            <surname>Manguinhas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ştefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deshayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Popescu</surname>
          </string-name>
          , Overview of ImageCLEF 2024:
          <article-title>Multimedia retrieval in medical applications, in: Experimental IR Meets Multilinguality</article-title>
          , Multimodality, and Interaction,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>