<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>DS@BioMed at ImageCLEFmedical Caption 2024: Enhanced Attention Mechanisms in Medical Caption Generation through Concept Detection Integration</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nhi Ngoc-Yen Nguyen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huy Le Tu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Phuong Dieu Nguyen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tan Nhat Do</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Triet Minh Thai</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thien B. Nguyen-Tat</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Oxford University Clinical Research Unit</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Information Technology</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Vietnam National University</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>Purpose: Our study presents an enhanced approach to medical image caption generation by integrating concept detection into attention mechanisms. Method: This method utilizes sophisticated models to identify critical concepts within medical images, which are then refined and incorporated into the caption generation process. Results: Our concept detection task, which employed the Swin-v2 model, achieved an F1 score of 0.58944 on the validation set and 0.61998 on the private test set, securing the third position. For the caption prediction task, our BEiT+BioBart model, enhanced with concept integration and post-processing techniques, attained a BERTScore of 0.60589 on the validation set and 0.5794 on the private test set, placing ninth. Conclusion: These results underscore the efficacy of concept-aware algorithms in generating precise and contextually appropriate medical descriptions. The findings demonstrate that our approach considerably improves the quality of medical image captions, highlighting its potential to enhance medical image interpretation and documentation, thereby contributing to improved healthcare outcomes.</p>
      </abstract>
      <kwd-group>
        <kwd>Medical Caption Generation</kwd>
        <kwd>Multimodal Learning</kwd>
        <kwd>Concept Detection</kwd>
        <kwd>ImageCLEF 2024</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid growth of deep learning techniques has profoundly influenced various sectors, notably medical
imaging [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Among these advancements, using neural networks in radiology has garnered considerable
attention due to its potential to enhance diagnostic accuracy and efficiency [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. A particularly intriguing
development in this field is the automatic generation of medical captions from radiology images [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This
innovation aims to assist radiologists by providing preliminary interpretations and streamlining clinical
documentation. Medical caption generation transforms visual information from radiological images
into coherent, clinically valuable language descriptions. This process is inherently challenging due to
the complexity and diversity of medical images, the need for precise and context-aware descriptions,
and the necessity to incorporate domain-specific knowledge [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ].
      </p>
      <p>
        Traditional systems often fall short of these requirements, leading to the development of advanced
attention mechanisms that can more effectively capture and interpret the intricate details found in
radiological images. Recent research shows that integrating concept detection into caption generation
algorithms improves performance [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Concept detection involves identifying and categorizing
critical visual elements in an image, such as anatomical structures, pathological findings, and medical
devices. By incorporating these detected concepts into the caption generation process, models can
produce more accurate and contextually relevant descriptions. One of the advancements in this field is
the ImageCLEF campaign, an annual multimodal machine learning competition established in 2003.
      </p>
      <p>
        ImageCLEF [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] fosters advancements in multimedia processing, including computer vision, image
analysis, classification, and retrieval in multilingual and multimodal contexts. In ImageCLEF 2024 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
participants engaged in the ImageCLEFmedical Caption task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which included two subtasks: concept
detection, aiming to identify critical elements within medical images, and caption prediction, focused on
generating descriptive texts based on identified concepts. Concept detection aims to associate biomedical
images with relevant medical concepts, thereby enhancing diagnostic notes by identifying key concepts
that should be included in preliminary reports. Moreover, it facilitates the efficient organization and
retrieval of medical images by indexing them according to related concepts. Caption prediction, or
diagnostic captioning, remains a complex research challenge intended to support the diagnostic process
by providing preliminary reports, rather than replacing physicians. This approach aids experienced
clinicians in managing high volumes of daily medical examinations more swiftly and efficiently, while
also reducing the likelihood of clinical errors among less experienced clinicians.
      </p>
      <p>Our findings underscore that integrating concept detection enhances the efficacy of attention
mechanisms and yields more coherent and diagnostically valuable captions. This research advances the
development of intelligent technologies aimed at supporting radiologists in clinical practice, thereby
elevating the standard of patient care. Section 2 provides a comprehensive review of pertinent literature.
Section 3 outlines our dataset, while Section 4 describes our proposed methodology and presents
experimental results. In Section 5, we discuss the conclusions drawn from our findings and outline
avenues for future research. Our objective is to contribute to the fields of medical imaging and natural
language processing by enhancing the capabilities of medical caption generation, thus paving the way
for further advancements in automated reporting and medical data interpretation.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Works</title>
      <sec id="sec-2-1">
        <title>2.1. Former Medical Datasets</title>
        <p>
          Medical imaging has been a focal point in the application of deep learning, benefiting from the availability
of comprehensive datasets. Early datasets such as the NIH ChestX-ray14 [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] provided a large collection
of chest radiographs annotated with disease labels, facilitating advancements in image classification
and disease detection tasks. The MIMIC-CXR dataset [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], developed by Johnson et al., further enriched
the field by offering not only radiographic images but also paired radiology reports, enabling research
in image-to-text generation. These datasets have been pivotal in training and validating deep learning
models, providing the groundwork for more sophisticated tasks such as medical caption generation and
concept detection.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Related Work Concept Detection</title>
        <p>
          Concept detection in medical imaging involves identifying and categorizing essential visual elements
such as anatomical structures, pathological findings, and medical devices. This task is crucial for
generating accurate and contextually relevant medical captions. Early methods primarily relied on
traditional machine learning techniques, which often struggled with the complexity and variability
of medical images (e.g., SVMs (support vector machines), random forests, and k-nearest neighbors).
However, recent advancements in deep learning, particularly CNNs (convolutional neural networks),
have improved the accuracy of concept detection. Notable CNN architectures such as ResNet50 [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]
and EfficientNet [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] have demonstrated substantial improvements in detecting and classifying visual
elements in medical images.
        </p>
        <p>
          Recently, Transformer-based models have been increasingly applied to concept detection due to their
ability to capture long-range dependencies and contextual information. Notable examples include ViT
(Vision Transformer) [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], BEiT (Bidirectional Encoder representation from Image Transformers) [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ],
and Swin Transformer [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. These models provide robust feature representations and have shown
promise in enhancing the accuracy and interpretability of medical image analysis.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Related Work Caption Prediction</title>
        <p>Caption prediction, or diagnostic captioning, involves generating descriptive text that accurately
summarizes the medical content of an image. This task extends beyond simple image annotation,
requiring models to produce coherent and clinically meaningful narratives. Traditional captioning
methods often used template-based approaches, which lacked flexibility and adaptability to different
medical contexts. With the advent of deep learning, particularly sequence-to-sequence models and
attention mechanisms, more sophisticated captioning systems have been developed.</p>
        <p>
          For example, Jing et al. proposed a hierarchical LSTM (Long Short-Term Memory) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] model
combined with a co-attention mechanism to generate detailed radiology reports from medical images.
Their model effectively captured the hierarchical structure of medical reports, producing more detailed
and contextually appropriate captions [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          The introduction of Transformer models specifically designed for the medical domain has advanced
the field of medical image captioning. Transformers, particularly models like BioBERT (Bidirectional
Encoder Representations from Transformers for Biomedical Text Mining) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], have demonstrated
exceptional capabilities in understanding and generating biomedical text due to their ability to handle
complex medical terminology and contexts. Recent research has leveraged these models to improve
medical captioning. Additionally, LLMs (large language models) such as BioGPT [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] have been explored
for their potential to generate coherent and diagnostically valuable medical captions, further pushing
the boundaries of automated reporting in radiology.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Task and Dataset Descriptions</title>
      <sec id="sec-3-1">
        <title>3.1. Task Descriptions</title>
        <p>
          ImageCLEF has included medical tasks annually since 2004. Since 2019, it has focused each medical task
on a specific issue but combined them into a single task with multiple subtasks. Four tasks are proposed
for 2024: Image Captioning, Image Question Answering for Colonoscopy Images, MEDIQA-MAGIC, and
Quality Control of Synthesized Medical Images Generated by GANs. In ImageCLEF 2024 [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], we engage
in the Image Captioning task [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], simultaneously participating in two subtasks: Concept Detection Task
and Caption Prediction Task, each crucial in the holistic process of generating informative captions for
medical images.
        </p>
        <p>• Concept Detection Task: The Concept Detection Task involves using a refined subset of the
UMLS 2022 AB version for concept generation. This subset is carefully selected to enhance the
accuracy of concept detection by filtering concepts based on their semantic types. Moreover,
to optimize concept detection from images, a stringent exclusion criterion is applied to remove
low-frequency concepts, based on insights from previous iterations.
• Caption Prediction Task: In the Caption Prediction Task, a series of meticulous preprocessing
steps are undertaken to ensure the integrity and coherence of the captioning process. Specifically,
the removal of embedded hyperlinks within captions is performed as a fundamental preprocessing
step. This careful action helps maintain data cleanliness and consistency, thereby supporting
subsequent analytical processes and enabling accurate caption prediction outcomes.</p>
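          <p>The hyperlink-removal preprocessing described above can be sketched as a small text-cleaning routine (an illustrative sketch only; the regular expression and helper name are our own, not the organizers' code):</p>
          <preformat>
```python
import re

# hypothetical pattern: match any http(s) URL up to the next whitespace
URL_RE = re.compile(r"https?://\S+")

def clean_caption(text):
    """Strip embedded hyperlinks from a caption and collapse the
    whitespace left behind (illustrative helper, not the official code)."""
    return re.sub(r"\s+", " ", URL_RE.sub("", text)).strip()

cleaned = clean_caption("See https://example.org/img for details.")
```
          </preformat>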
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Dataset Information</title>
        <p>
          The data for the captioning task will consist of images selected from medical literature, including
annotations and related UMLS terms manually curated as metadata. For the development dataset,
Radiology Objects in COntext Version 2 (ROCOv2) [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], an updated and expanded version of the
Radiology Objects in COntext (ROCO) dataset [21], is used for both subtasks. As in previous versions,
this dataset originates from biomedical articles in the PMC OpenAccess collection [22], with the test set
comprising a set of previously unseen images.
        </p>
        <p>• Training Dataset: Includes 70,108 images.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and Results</title>
      <sec id="sec-4-1">
        <title>4.1. The Proposed Approach</title>
        <sec id="sec-4-1-1">
          <title>4.1.1. Concept Detection Methodology</title>
          <p>
            We aim to extract features from images by carefully examining and testing a variety of pretrained
models that fall into three main architectural paradigms, which are shown in Table 1. The list that
follows summarizes the particular models that are being examined:
• CNN-based architectures: Microsoft/ResNet-50 [23], an archetype of conventional
convolutional neural network (CNN) models, characterized by its utilization of residual blocks to mitigate
the challenges associated with gradient vanishing, thereby enhancing model performance within
computationally tractable bounds.
• Transformer-based architectures:
– ViT (Vision Transformer) [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]: Pioneering the paradigm shift in image data processing, ViT
adopts a transformative approach by encoding images into patch embeddings, followed
by feature extraction using a Transformer encoder, reminiscent of text data processing
methodologies.
– DeiT (Data-efficient Image Transformers) [24]: An evolution of ViT, DeiT emphasizes data
efficiency, facilitating training with reduced data volumes while preserving commendable
performance metrics.
– Swin-v2 (Shifted Window Transformer v2) [25]: Distinguished by its innovative utilization
of self-attention mechanisms within shifted windows, Swin-v2 ameliorates computational
complexity and augments performance across a spectrum of tasks, including image
classification and segmentation.
– BEiT (Bidirectional Encoder representation from Image Transformers) [26]: At the
confluence of Transformer and BERT architectures, BEiT excels in capturing robust image features
through bidirectional encoding methodologies.
– BiomedCLIP [27]: A domain-specific adaptation of ViT tailored for biomedical applications,
leveraging the CLIP architecture to enhance performance in medical domain tasks.
• Model Ensembles: In our ensemble framework, we leverage sophisticated fusion techniques to
harness the collective predictive power of multiple models. A key method employed is weighted
averaging, where predictions from each member model are aggregated based on their respective
weights derived from validation performance.
          </p>
          <p>– Ensemble-2 model (Swin-v2 + BEiT): The symbiotic fusion of Swin-v2 and BEiT engenders
a collaborative synergy, capitalizing on the distinctive strengths of each constituent model
to surpass individual model performances.
– Ensemble-4 model (Swin-v2 + BEiT + DeiT + ViT): Comprising a composite quartet of models,
this ensemble fortifies accuracy and generalization capabilities through the combination of
representatives from Transformer-based models.</p>
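          <p>The weighted-averaging fusion described above can be sketched as follows (a minimal illustration with invented numbers; in practice the weights are derived from each member model's validation performance):</p>
          <preformat>
```python
import numpy as np

def weighted_ensemble(probs_list, weights, threshold=0.5):
    """Weighted-average fusion of per-model concept probabilities.
    probs_list holds (n_samples, n_concepts) sigmoid outputs, one array
    per member model; the weights are normalized before averaging."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights
    stacked = np.stack(probs_list)               # (n_models, samples, concepts)
    fused = np.tensordot(w, stacked, axes=1)     # weighted average
    return (fused >= threshold).astype(int)      # multi-label decisions

# toy example with invented numbers: two models, three concepts
p_swin = np.array([[0.9, 0.2, 0.6]])
p_beit = np.array([[0.7, 0.4, 0.3]])
preds = weighted_ensemble([p_swin, p_beit], weights=[0.6, 0.4])
```
          </preformat>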
          <p>Following the feature extraction step, the retrieved features pass via a linear layer and classifier, where
they are transformed and classified to provide outputs that correspond to the chosen class categories.
This key step emphasizes the thorough orchestration of feature transformation and classification to
produce predictions specific to the required class taxonomy.
Concept Filtering: When using the BEiT (Bidirectional Encoder
Representations from Image Transformers) model, concept filtering is carried out by adjusting the
output threshold and observing how the results change. The procedure is as follows: run inference
with the BEiT model on a given dataset, then vary the output threshold used to filter the predicted
concepts or classes. Setting various threshold values and observing the ensuing outcomes allows us
to tune the threshold and assess how different thresholds affect the model’s performance.</p>
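          <p>This threshold-adjustment procedure can be sketched as a simple validation sweep (an illustrative sketch, not the authors' exact code; the sample-averaged F1 here stands in for the competition metric):</p>
          <preformat>
```python
import numpy as np

def sweep_thresholds(probs, labels, thresholds=(0.40, 0.45, 0.50, 0.55)):
    """Try several output thresholds on validation sigmoid outputs and
    return the one with the best sample-averaged F1. probs and labels are
    (n_samples, n_concepts) arrays with labels in {0, 1}."""
    best_t, best_f1 = None, -1.0
    for t in thresholds:
        preds = (probs >= t).astype(int)
        tp = (preds * labels).sum(axis=1)
        denom = preds.sum(axis=1) + labels.sum(axis=1)
        # per-sample F1 = 2*TP / (|pred| + |gold|); empty/empty counts as 1
        f1 = np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 1.0).mean()
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```
          </preformat>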
        </sec>
        <sec id="sec-4-1-2">
          <title>4.1.2. Captioning Methodology</title>
          <p>Given the primary focus on Image Captioning in this research, the architectural design must effectively
extract salient features from both the image and its corresponding text, combining them to generate the
final caption. Our carefully curated multimodal fusion architecture incorporates essential components
like an image encoder for pertinent feature extraction, a text encoder for eliciting semantic information
from text, and a decoder to synthesize insights from the textual context. Additionally, the fusion
mechanism integrates image features and output classifications from concept detection, synergistically
blending them with textual input to decode and generate the caption output. The proposed approach
leverages the pretrained Bidirectional Encoder Representations from Transformers (BEiT) model for
image feature extraction. Boasting a symmetric Transformer architecture, BEiT can comprehend image
representational features by concurrently considering both surrounding image patches and global
context. With its extensive training on copious data, BEiT can be fine-tuned and achieve state-of-the-art
results across several computer vision and image processing benchmarks.</p>
          <p>To encode the input text captions, this research employs two domain-specific language models:
BioBART (Bidirectional and Auto-Regressive Transformers for Biomedical Text) and ClinicalT5
(Text-to-Text Transfer Transformer fine-tuned on clinical data).</p>
          <p>• BioBART [28] is a version of the BART model [29] adapted and further pre-trained on biomedical
text data such as medical literature, case reports, and genomic analysis documents. Leveraging
its bidirectional Transformer architecture, BioBART can effectively encode both general and
biomedical domain-specific text, enabling the extraction of rich semantic representations for
tasks like text summarization, medical question-answering, and report generation.
• ClinicalT5 [30] is the T5 model [31] additionally fine-tuned on clinical text data including patient
records and consultation reports. Harnessing its text-to-text transfer learning capability for
multi-task modeling, ClinicalT5 can be applied to various natural language processing tasks
in the healthcare domain, such as treatment classification, medical information extraction, and
summarization of patient records.</p>
          <p>For the process of encoding text concepts, we utilize the output from the BEiT model, which is
specifically trained for the concept detection task. During this process, we apply a threshold of 0.5
to selectively retain predictions with a confidence score higher than 0.5, while discarding predictions
with lower confidence scores. This discriminative process aids in capturing the semantic essence of the
detected concepts, thereby facilitating their seamless integration into the multimodal fusion architecture
for further processing and analysis.</p>
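          <p>The concept-encoding step above can be sketched as follows: detector confidences at or above the 0.5 threshold are mapped to their concept names and joined into a text snippet for the fusion model (the mapping and function names are hypothetical illustrations):</p>
          <preformat>
```python
def concepts_to_prompt(probs, id2name, threshold=0.5):
    """Map concept-detector confidences to a text snippet for the caption
    decoder: keep every concept whose sigmoid output is at least the
    threshold and join the names. id2name is a hypothetical index-to-term
    mapping; the real vocabulary comes from the task's UMLS subset."""
    kept = [id2name[i] for i, p in enumerate(probs) if p >= threshold]
    return "; ".join(kept)

# invented toy vocabulary and confidences for illustration
names = {0: "x-ray", 1: "lung", 2: "catheter"}
snippet = concepts_to_prompt([0.8, 0.3, 0.7], names)
```
          </preformat>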
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experimental Settings</title>
        <p>Several experiments have been conducted to assess the efficacy of the proposed methodologies in
addressing the ImageCLEFmedical Caption 2024 challenge. Specifically, each pre-trained vision model
has been instantiated and evaluated, as detailed in Table 2, which offers a comprehensive overview of
the pre-trained models employed in this study, encompassing their respective vision model designations,
versions, and parameter counts for each fusion model. These experiments serve to elucidate both the
potential and limitations inherent in each model with regard to the Image Captioning task, thereby
facilitating the selection of the optimal approach for generating final predictions on the private test
dataset of the competition.</p>
        <p>• Concept Detection Task: For the concept detection subtask, the optimization criterion utilized
during training is the AdamW optimizer [32]. The models are trained for 5 epochs with a batch
size of 30 and an initial learning rate of 5e-5. During training, the BCEWithLogitsLoss function,
which combines a Sigmoid layer and BCELoss, is applied, and a threshold value ranging from 0.45
to 0.5 is predominantly used to process the model’s output. To ensure meaningful comparison
results, consistent hyperparameters are maintained across all experiments.
• Caption Prediction Task: During the training process for the caption prediction task, the
CrossEntropyLoss criterion is applied with the ignore_index parameter set to the pad token
index of the tokenizer. This setup helps mitigate the influence of pad tokens on loss
computation, ensuring more precise training outcomes. For optimization, the AdamW optimizer is
utilized with a learning rate of 1e-4 and a weight decay rate of 0.01, chosen to balance training
efficiency and model generalization [32]. To leverage the benefits of Mixed Precision Training
[33], the Gradient scaler is integrated into the training pipeline. This scaler adjusts the gradient
scale, enhancing training efficiency and convergence speed of the models. Additionally, the
LinearScheduleWithWarmup is employed to adjust the learning rate over time during training.
This scheduling mechanism requires pre-defining the number of warmup steps and total training
steps to optimize the learning rate schedule effectively. During each training iteration, a batch
size of 16 is utilized. Overall, these training configurations and optimizations contribute to the
performance and stability of the training process, leading to superior model performance.
The hardware utilized for computation included both NVIDIA Tesla T4 and NVIDIA Tesla P100 GPUs.</p>
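        <p>The warmup schedule described above can be sketched in plain Python: the learning rate rises linearly from zero to the base rate over the warmup steps, then decays linearly to zero by the final training step (a minimal sketch of the schedule's behaviour, not the library implementation):</p>
        <preformat>
```python
def linear_schedule_with_warmup(step, warmup_steps, total_steps, base_lr=1e-4):
    """Learning rate at a given step under linear warmup followed by
    linear decay (plain-Python sketch of the schedule described above)."""
    if step >= warmup_steps:
        # decay phase: fall linearly from base_lr to zero at total_steps
        scale = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    else:
        # warmup phase: rise linearly from zero to base_lr
        scale = step / max(1, warmup_steps)
    return base_lr * scale

lr_mid_training = linear_schedule_with_warmup(55, warmup_steps=10, total_steps=100)
```
        </preformat>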
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Evaluation Methodology</title>
        <p>Our evaluation consists of two tasks: Concept Detection and Caption Prediction. Each task uses specific
metrics to measure performance.</p>
        <p>• Concept Detection Task: We assess the performance of concept identification using Accuracy,
Precision, Recall, and F1 score. These metrics measure overall correctness, positive prediction
accuracy, relevant concept capture, and balanced precision and recall, respectively [34].
• Caption Prediction Task: We evaluate the quality and coherence of generated captions
using BERTScore (Bidirectional Encoder Representations from Transformers Score) [35], BLEU
(Bilingual Evaluation Understudy, 1-4) [36], ROUGE (Recall-Oriented Understudy for Gisting
Evaluation) [37], and METEOR (Metric for Evaluation of Translation with Explicit ORdering)
[38]. These metrics assess semantic similarity, fluency, relevance, coherence, informativeness,
and lexical/syntactic aspects.</p>
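        <p>As one concrete example of these metrics, unigram BLEU (BLEU-1) modified precision can be computed as below (a simplified sketch without the brevity penalty or higher-order n-grams used in the official evaluation):</p>
        <preformat>
```python
from collections import Counter

def bleu1(candidate, reference):
    """Unigram (BLEU-1) modified precision between a candidate caption and
    a single reference: clipped unigram matches over candidate length.
    A simplified sketch; full BLEU adds a brevity penalty and
    higher-order n-grams."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = sum(min(n, ref_counts[tok]) for tok, n in Counter(cand).items())
    return clipped / max(len(cand), 1)

# invented example captions: "the" has no match in the reference
score = bleu1("the chest x ray", "chest x ray shows pneumonia")
```
        </preformat>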
        <p>Using this diverse set of metrics, we ensure a comprehensive understanding of the model’s
performance and facilitate informed decision-making for further refinement.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Experimental Results</title>
        <p>As detailed in Table 2, the comparative evaluation of various concept detection models on the
development validation set yields valuable insights into their performance across diverse evaluation metrics.
Among these models, Swin-v2 emerges as the frontrunner, exhibiting the highest accuracy (0.16366),
recall (0.47114), and F1 score (0.58944). This underscores Swin-v2’s effectiveness in not only accurately
identifying pertinent instances but also striking a harmonious balance between precision and recall,
rendering it well-suited for concept detection endeavors. Ensemble methodologies, which combine
predictions from multiple models, demonstrate promising outcomes as well. Notably, the Ensemble-2 model
showcases commendable precision (0.94501) and a noteworthy F1 score (0.58581), suggesting that leveraging
diverse models can augment predictive efficacy, particularly in precision-oriented tasks. While the
Ensemble-4 model marginally surpasses Ensemble-2 in precision (0.94508), it exhibits a slightly lower
F1 score (0.58460), implying a subtle trade-off in recall when employing additional models.</p>
        <p>BEiT-L and BiomedCLIP also manifest robust performance metrics. BEiT-L achieves an accuracy of
0.16145 and an F1 score of 0.58418, while BiomedCLIP demonstrates balanced performance with an
accuracy of 0.15975 and an F1 score of 0.58319. These findings underscore the efficacy of these models
in maintaining high precision and achieving a favorable balance with recall.</p>
        <p>Other models such as BEiT-B, DeiT-B, and ViT-B exhibit commendable performance, albeit slightly
trailing the top performers. For instance, BEiT-B records an accuracy of 0.15554 and an F1 score of
0.57662, indicating respectable yet not leading-edge performance. Similarly, DeiT-B and ViT-B attain
comparable results, with DeiT-B registering an accuracy of 0.15674 and an F1 score of 0.57641, and
ViT-B yielding an accuracy of 0.15413 and an F1 score of 0.57439. Conversely, ResNet-50 demonstrates
notably inferior performance across all metrics, with an accuracy of 0.11412 and an F1 score of 0.51566.
This underscores its relatively limited efficacy in the concept detection task.</p>
        <p>In summation, the Swin-v2 model emerges as the most dependable choice for concept detection owing
to its superior accuracy, recall, and F1 score. Ensemble methodologies, particularly Ensemble-2, exhibit
robust performance, underscoring the advantages of model amalgamation. BEiT-L and BiomedCLIP
offer balanced performance, rendering them viable alternatives. Meanwhile, ResNet-50’s diminished
performance suggests its lesser suitability for this specific task, underscoring the strides made by newer
architectural advancements.</p>
        <p>As detailed in Table 3, the comparative analysis of various model configurations on the validation set
reveals insights into the efficacy of incorporating concepts and post-processing techniques in caption
generation tasks. The models evaluated include BEiT+BioBart and BEiT+Clinical-T5, with configurations
either incorporating concepts derived from the Concept Detection subtask or excluding them, and
applying post-processing to mitigate repetition in the output captions. The results indicate that for the
BEiT+BioBart model, the inclusion of concepts and the application of post-processing do not result
in any variation in performance across all evaluated metrics, including BERTScore, BLEU (from 1 to
4), ROUGE, and METEOR. This suggests that for BEiT+BioBart, the post-processing step does not
impact the model’s ability to generate captions when concepts are included, maintaining consistent
performance.</p>
        <p>In contrast, the BEiT+Clinical-T5 model demonstrates a more nuanced response to the incorporation
of concepts and post-processing. When concepts are included without post-processing, there is a
slight decline in BERTScore compared to the configuration without concepts. However, BLEU, ROUGE,
and METEOR scores show an improvement with the inclusion of concepts, highlighting the potential
benefits of concept integration in enhancing the model’s performance in these specific metrics. Notably,
when post-processing is applied, the BEiT+Clinical-T5 model exhibits substantial improvements across
all metrics, irrespective of the presence of concepts. This improvement underscores the critical role of
post-processing in refining output quality, with the highest METEOR score observed in the configuration
without concepts but with post-processing. Comparing the two models, BEiT+Clinical-T5 generally
outperforms BEiT+BioBart in BLEU, ROUGE, and METEOR scores. This superior performance is
particularly evident when post-processing is applied, suggesting that BEiT+Clinical-T5 is more responsive to
post-processing enhancements. However, BEiT+BioBart achieves a higher BERTScore when concepts
are included, indicating a potential strength in semantic similarity measures.</p>
        <p>In conclusion, the analysis underscores the importance of model selection, the strategic inclusion
of concepts, and the application of post-processing in optimizing caption generation performance.
BEiT+Clinical-T5 emerges as a more robust model with gains from post-processing, while BEiT+BioBart
maintains consistent performance with concept inclusion. These findings provide valuable insights
for future research and development in automated caption generation systems, emphasizing tailored
approaches for different model architectures.</p>
        <p>As detailed in Table 4, the performance evaluation of different models on the validation and private
test sets provides a comprehensive understanding of their effectiveness across various configurations and
datasets. For concept detection, three configurations were assessed: Concept BEiT-B with a threshold of
0.45, Detection BEiT-B with a threshold of 0.5, and Swin-v2 with a threshold of 0.5. The results reveal
that the Swin-v2 model performs the best, achieving scores of 0.58944 on the validation set and 0.61998
on the private test set, suggesting superior capability in accurately detecting concepts compared to the
BEiT-B models. The Concept BEiT-B model with a threshold of 0.45 also shows strong performance,
though slightly lower than Swin-v2, indicating the threshold setting’s impact on model efficacy.</p>
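The role of the decision threshold can be sketched as follows. The concept identifiers and probabilities below are made up for illustration, and this per-image F1 is a simplified stand-in for the official concept-detection scorer:

```python
# Sketch: a per-concept probability threshold (0.45 vs. 0.5, as in
# Table 4) turns multi-label sigmoid outputs into a predicted concept
# set, which is then scored against the gold set with F1.
def predict_concepts(probs: dict[str, float], threshold: float) -> set[str]:
    """Keep every concept whose predicted probability clears the threshold."""
    return {cui for cui, p in probs.items() if p >= threshold}

def f1(pred: set[str], gold: set[str]) -> float:
    """Harmonic mean of set precision and recall."""
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sigmoid outputs for one image (concept IDs are illustrative).
probs = {"C0040405": 0.92, "C0817096": 0.48, "C0023884": 0.31}
gold = {"C0040405", "C0817096"}
print(f1(predict_concepts(probs, 0.45), gold))  # borderline concept kept
print(f1(predict_concepts(probs, 0.50), gold))  # borderline concept dropped
```

Lowering the threshold trades precision for recall, which is why the 0.45 and 0.5 configurations score differently on the same backbone.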
        <p>For caption prediction, four configurations were evaluated: BEiT+Clinical-T5 without concepts and
without post-processing, BEiT+Clinical-T5 with concepts and without post-processing,
BEiT+Clinical-T5 with concepts and with post-processing, and BEiT+BioBart with concepts and with post-processing.
The BEiT+Clinical-T5 model without concepts and post-processing scored 0.46001 on the validation set
and 0.4433 on the private test set, while adding concepts slightly improved the private test set score to
0.4453. However, the most considerable performance boost was observed when post-processing was
applied to the BEiT+Clinical-T5 model with concepts, raising the scores to 0.57597 on the validation set
and 0.558 on the private test set. This highlights the substantial role of post-processing in enhancing
model performance.</p>
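The text does not spell out the post-processing algorithm beyond "mitigating repetition"; one plausible minimal sketch, assuming the goal is to drop verbatim duplicate sentences and collapse immediately repeated phrases, is:

```python
# Hypothetical repetition-mitigation post-processing for generated
# captions: remove consecutive duplicate sentences, then collapse an
# immediately repeated phrase of up to max_n words. This is a sketch of
# one plausible approach, not the authors' exact procedure.
import re

def dedupe_sentences(caption: str) -> str:
    """Remove a sentence when it repeats the previous one verbatim."""
    sentences = [s.strip() for s in re.split(r"(?<=\.)\s+", caption) if s.strip()]
    kept = []
    for s in sentences:
        if not kept or s.lower() != kept[-1].lower():
            kept.append(s)
    return " ".join(kept)

def collapse_repeated_phrase(caption: str, max_n: int = 4) -> str:
    """Collapse one immediately repeated n-gram (longest match first)."""
    words = caption.split()
    out, i = [], 0
    while i < len(words):
        for n in range(max_n, 0, -1):
            if words[i:i + n] and words[i:i + n] == words[i + n:i + 2 * n]:
                out.extend(words[i:i + n])  # keep one copy of the phrase
                i += 2 * n
                break
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

print(dedupe_sentences("Axial CT of the chest. Axial CT of the chest."))
print(collapse_repeated_phrase("ct scan of the of the abdomen"))
```

Because decoder models often degenerate into such loops, even a simple filter like this can lift n-gram metrics noticeably, consistent with the jump observed when post-processing is enabled.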
        <p>Moreover, the BEiT+BioBart model with concepts and post-processing achieved the highest scores
among all configurations, with 0.60589 on the validation set and 0.5794 on the private test set. This
underscores the effectiveness of combining concepts with post-processing in the BioBart architecture,
suggesting that such integration can improve caption generation quality. Overall, the analysis
emphasizes the critical influence of model configuration, the integration of concepts, and the application of
post-processing on the performance outcomes. The superior performance of the Swin-v2 model for
concept detection and the BEiT+BioBart model for caption prediction indicates that different models
may excel in specific sub-tasks, advocating for a nuanced approach in model selection and optimization
based on the task requirements and dataset characteristics.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Error Analysis</title>
        <p>As detailed in Table 5, when employing the BEiT model in conjunction with ClinicalT5 for medical
image analysis, several notable errors have been observed across various dimensions. These errors
include incorrect identification of regions or image types, omissions in providing specific details, and
inaccuracies in context, thereby impacting the overall reliability of the model’s results. The model
occasionally encounters difficulties in accurately identifying regions of interest within the images.</p>
        <p>For instance, it might misinterpret an anteroposterior X-ray of the pelvis as indicating bilateral tibial
fractures. Similarly, it might incorrectly classify a cross-sectional, contrast-enhanced CT scan of the
larynx as a left renal tumor.</p>
        <p>Omissions in providing specific details have become evident in the model’s predictions. The model
often fails to provide the complex details necessary for comprehensive clinical interpretation. For
example, it may overlook critical features such as the eccentric position of a metallic head in an
X-ray or the presence of stratified bile in a CT scan. Moreover, contextual inaccuracies are common,
leading to misleading or entirely incorrect descriptions. The model sometimes struggles to grasp the
broader context of medical images, resulting in descriptions that do not align appropriately with the
actual content of the images. Similarly, when utilizing the BEiT model in combination with BioBART,
analogous errors have been observed across various aspects. These include incorrect identification of
regions or image types, omissions in providing specific details, and contextual inaccuracies. Comparing
BEiT with ClinicalT5 and BEiT with BioBART, although both models exhibit similar error patterns, there
are minor differences in their performance. BEiT combined with ClinicalT5 demonstrates slightly better
performance in certain aspects, such as providing more accurate descriptions and better contextual
understanding. Conversely, BEiT combined with BioBART shows a slight advantage in specific scenarios,
particularly in identifying anatomical structures or image types. However, both models have room for
improvement, highlighting ongoing challenges in developing robust and reliable automated methods
for medical image analysis. In both models, conceptual errors frequently occur, indicating a mismatch
between the predicted concept and the actual content of the medical images. These errors underscore
the challenges in accurately interpreting and classifying medical images based on their content.</p>
        <p>To enhance the accuracy of medical image analysis models, a range of strategies must be employed
to improve data quality, model architecture, and training processes. Firstly, the use of high-quality,
well-annotated datasets is crucial. Combining this with data augmentation techniques such as rotation,
zooming, flipping, and color adjustment can help increase the size and diversity of the training dataset,
thereby enhancing the model’s generalization capabilities. In terms of model architecture, employing
models pre-trained on domain-specific datasets or state-of-the-art (SOTA) models that achieve superior
results is essential. Furthermore, incorporating additional feature extraction from image data, such
as bounding-boxes, segmentation, or advanced features, can help the model better understand the
structure and context of the images. Finally, regularly testing and re-evaluating the model using diverse
datasets will help in early detection of errors and timely adjustment of the model, ensuring the reliability
and accuracy of medical image analysis results.</p>
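The augmentation strategies listed above (rotation, flipping, colour adjustment) can be sketched on a toy nested-list "image"; a real training pipeline would use a library such as torchvision or albumentations, but the operations reduce to simple array transforms:

```python
# Toy illustration of the augmentations named in the text, applied to a
# 2x2 grayscale "image" given as nested lists. Pixel values are invented.
def hflip(img):
    """Horizontal flip: mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: reverse rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def adjust_brightness(img, delta, lo=0, hi=255):
    """Colour adjustment: shift every pixel by delta, clamped to [lo, hi]."""
    return [[min(hi, max(lo, p + delta)) for p in row] for row in img]

img = [[0, 50],
       [100, 200]]
print(hflip(img))                   # [[50, 0], [200, 100]]
print(rotate90(img))                # [[100, 0], [200, 50]]
print(adjust_brightness(img, 60))   # [[60, 110], [160, 255]]
```

Each transform yields a new labelled training sample at negligible cost, which is how augmentation enlarges and diversifies the dataset without new annotation.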
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Works</title>
      <p>In this study, an enhanced approach to medical caption generation was introduced by integrating
concept detection into attention mechanisms. The method improved performance metrics, with the
Swin-v2 model achieving an F1 score of 0.58944 on the validation set and 0.61998 on the private test set,
earning 3rd place in concept detection. For caption prediction, the BEiT+BioBart model, augmented
with concept integration and post-processing, achieved a BERTScore of 0.60589 on the validation set
and 0.5794 on the private test set, securing 9th place. These results underscore the effectiveness of
concept-aware systems in generating precise and contextually relevant medical descriptions.</p>
      <p>
        Future work will focus on enhancing model performance through several avenues unrelated to
data expansion. First, optimizing model architectures and training protocols can further improve
accuracy and efficiency. Second, incorporating more advanced attention mechanisms and fine-tuning
hyperparameters may yield better contextual understanding and caption quality. Third, integrating
explainability techniques will ensure that model predictions are interpretable and trustworthy for
healthcare professionals. Additionally, exploring transfer learning and domain adaptation techniques
could enhance model performance across various medical imaging modalities. Furthermore, leveraging
large language models (LLMs) such as GPT-3 and BioGPT for their potential to generate coherent
and diagnostically valuable medical captions will be explored [39] [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Finally, developing robust
post-processing algorithms to further refine generated captions, ensuring they meet clinical standards,
is planned. These efforts aim to advance the capabilities of medical image analysis and automated
reporting systems, contributing to more sophisticated and reliable tools for the healthcare industry.
      </p>
    </sec>
    <sec id="sec-6">
        <title>Acknowledgment and References</title>
      <p>This research is funded by University of Information Technology, Vietnam National University
Ho Chi Minh City, under grant number D4-2024-01.</p>
      <p>[36] K. Papineni, S. Roukos, T. Ward, W.-J. Zhu, Bleu: a method for automatic evaluation of machine
translation, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,
Philadelphia, Pennsylvania, USA, 2002, pp. 311–318. URL: https://aclanthology.org/P02-1040.
doi:10.3115/1073083.1073135.
[37] C.-Y. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization
Branches Out, Association for Computational Linguistics, Barcelona, Spain, 2004, pp. 74–81. URL:
https://aclanthology.org/W04-1013.
[38] S. Banerjee, A. Lavie, METEOR: An automatic metric for MT evaluation with improved correlation
with human judgments, in: J. Goldstein, A. Lavie, C.-Y. Lin, C. Voss (Eds.), Proceedings of the
ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or
Summarization, Association for Computational Linguistics, Ann Arbor, Michigan, 2005, pp. 65–72.
URL: https://aclanthology.org/W05-0909.
[39] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh,
D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark,
C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language models are few-shot
learners, in: Proceedings of the 34th International Conference on Neural Information Processing
Systems, NIPS ’20, Curran Associates Inc., Red Hook, NY, USA, 2020.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Litjens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kooi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. E.</given-names>
            <surname>Bejnordi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A. A.</given-names>
            <surname>Setio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ciompi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghafoorian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A. van der</given-names>
            <surname>Laak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. van</given-names>
            <surname>Ginneken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. I.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <article-title>A survey on deep learning in medical image analysis</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>42</volume>
          (
          <year>2017</year>
          )
          <fpage>60</fpage>
          -
          <lpage>88</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1361841517301135. doi:10.1016/j.media.2017.07.005.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Esteva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuprel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Novoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Swetter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Blau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thrun</surname>
          </string-name>
          ,
          <article-title>Dermatologist-level classification of skin cancer with deep neural networks</article-title>
          ,
          <source>Nature</source>
          <volume>542</volume>
          (
          <year>2017</year>
          )
          <fpage>115</fpage>
          -
          <lpage>118</lpage>
          . doi:10.1038/nature21056.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Xie</surname>
          </string-name>
          , E. Xing,
          <article-title>On the automatic generation of medical imaging reports</article-title>
          , in: I. Gurevych, Y. Miyao (Eds.),
          <source>Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          ,
          <source>Association for Computational Linguistics</source>
          , Melbourne, Australia,
          <year>2018</year>
          , pp.
          <fpage>2577</fpage>
          -
          <lpage>2586</lpage>
          . URL: https://aclanthology.org/P18-1240. doi:10.18653/v1/P18-1240.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C. Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Xing</surname>
          </string-name>
          ,
          <article-title>Knowledge-driven encode, retrieve, paraphrase for medical image report generation</article-title>
          ,
          <source>in: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence</source>
          , AAAI'19/IAAI'19/EAAI'19, AAAI Press,
          <year>2019</year>
          . URL: https://doi.org/10.1609/aaai.v33i01.33016666. doi:10.1609/aaai.v33i01.33016666.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.-C.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Demner-Fushman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          ,
          <article-title>Learning to read chest x-rays: Recurrent neural cascade model for automated image annotation</article-title>
          ,
          <source>in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>2497</fpage>
          -
          <lpage>2506</lpage>
          . doi:10.1109/CVPR.2016.274.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Concept-aware video captioning: Describing videos with effective prior information</article-title>
          ,
          <source>IEEE Transactions on Image Processing</source>
          <volume>32</volume>
          (
          <year>2023</year>
          )
          <fpage>5366</fpage>
          -
          <lpage>5378</lpage>
          . doi:10.1109/TIP.2023.3307969.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <article-title>Improving image captioning via predicting structured concepts</article-title>
          , in: H. Bouamor, J. Pino, K. Bali (Eds.),
          <source>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing</source>
          , Association for Computational Linguistics, Singapore,
          <year>2023</year>
          , pp.
          <fpage>360</fpage>
          -
          <lpage>370</lpage>
          . URL: https://aclanthology.org/2023.emnlp-main.25. doi:10.18653/v1/2023.emnlp-main.25.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          , L. Bloch,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prokopchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Karpenka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kiesel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          , Overview of ImageCLEF 2024:
          <article-title>Multimedia retrieval in medical applications</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and Interaction,
          <source>Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024)</source>
          , Springer Lecture Notes in Computer Science LNCS, Grenoble, France,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Seco de Herrera</surname>
          </string-name>
          , L. Bloch,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M. G.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          , Overview of ImageCLEFmedical 2024 -
          <article-title>Caption Prediction and Concept Detection</article-title>
          , in: CLEF2024 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Grenoble, France,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bagheri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          , Chestx-ray8:
          <article-title>Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases</article-title>
          ,
          <source>in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3462</fpage>
          -
          <lpage>3471</lpage>
          . doi:10.1109/CVPR.2017.369.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Goldberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Amaral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Glass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Hausdorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Ivanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Mark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Mietus</surname>
          </string-name>
          , G. B. Moody, C.-K. Peng,
          <string-name>
            <given-names>H. E.</given-names>
            <surname>Stanley</surname>
          </string-name>
          ,
          <article-title>PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals</article-title>
          ,
          <source>Circulation</source>
          <volume>101</volume>
          (
          <year>2000</year>
          )
          <fpage>E215</fpage>
          -
          <lpage>E220</lpage>
          . doi:10.1161/01.CIR.101.23.e215.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . doi:10.1109/CVPR.2016.90.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>EfficientNet: Rethinking model scaling for convolutional neural networks</article-title>
          ,
          <source>in: ICML</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6105</fpage>
          -
          <lpage>6114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dosovitskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Beyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kolesnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weissenborn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Unterthiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Minderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Heigold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Houlsby</surname>
          </string-name>
          ,
          <article-title>An image is worth 16x16 words: Transformers for image recognition at scale</article-title>
          ,
          <source>International Conference on Learning Representations</source>
          abs/2010.11929 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>BEiT: BERT pre-training of image transformers</article-title>
          ,
          <source>International Conference on Learning Representations</source>
          abs/2106.08254 (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <article-title>Swin transformer: Hierarchical vision transformer using shifted windows</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF international conference on computer vision</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>10012</fpage>
          -
          <lpage>10022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>Long short-term memory</article-title>
          ,
          <source>Neural Comput.</source>
          <volume>9</volume>
          (
          <year>1997</year>
          )
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          . doi:10.1162/neco.1997.9.8.1735.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>So</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>BioBERT: a pre-trained biomedical language representation model for biomedical text mining</article-title>
          ,
          <source>Bioinformatics</source>
          <volume>36</volume>
          (
          <year>2020</year>
          )
          <fpage>1234</fpage>
          -
          <lpage>1240</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          , et al.,
          <article-title>BioGPT: generative pre-trained transformer for biomedical text generation and mining</article-title>
          ,
          <source>Briefings in Bioinformatics</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Koitka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G. S.</given-names>
            <surname>de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Horn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Nensa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          , ROCOv2: Radiology
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>