<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CLEF 2025 Working Notes</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Evaluating the Privacy of Images Generated by ImageCLEFmedical GAN 2025 Using a Similarity Classification Method Based on Image Enhancement and Deep Learning Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Haojie Zuo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaobing Zhou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Information Science and Engineering, Yunnan University</institution>
          ,
          <addr-line>Kunming 650504, Yunnan</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>The ImageCLEFmed GAN 2025 task aims to detect whether synthesized medical images contain "fingerprints" of the training data. In this paper, we adopt a similarity classification method based on image enhancement and deep learning models to determine which real images were used in the training process by comparing the similarity between real and synthetic images. We preprocess the images using multiple image enhancement techniques (Gaussian filtering, the Hessian matrix, the Laplacian operator, and bilateral filtering). Then, we use convolutional neural network (CNN) and ResNet50 models to extract image features and calculate the similarity between images. In experiments on the validation set, our similarity classification method achieved good accuracy and F1 score performance. Among the submitted results, our best F1 score was 0.633 and our best kappa score was -0.016, showing that the method can distinguish between "used" and "unused" images. Our experimental results show that it is possible to identify the real images used to generate synthetic images through image enhancement and deep learning models. Our code is available at https://github.com/Qqiiiii/ImageCLEF.git.</p>
      </abstract>
      <kwd-group>
        <kwd>Image Enhancement</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>CNN</kwd>
        <kwd>ResNet50</kwd>
        <kwd>Similarity Calculation</kwd>
        <kwd>Medical Imaging</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the field of medical image analysis, deep learning models[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] have shown significant potential in
assisting diagnosis and treatment, especially in automating the analysis and interpretation of medical
images. However, training these deep learning models usually requires a large amount of data, and
obtaining high-quality medical image data often faces privacy and data sharing challenges. To address
this problem, generative models such as generative adversarial networks (GANs) have been proposed
and widely used to synthesize medical image data to enhance the diversity and quality of data sets,
thereby helping to train more powerful models.
      </p>
      <p>
        Although generative models can significantly improve data diversity when generating synthetic
images, synthetic images may inadvertently expose sensitive information in the training data during the
process of learning data distribution, thus bringing the risk of privacy leakage.[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] Recent studies
have shown that by analyzing the generated images, hidden "fingerprints" can be identified, which may
point to the source of the training images. Therefore, in the field of medical imaging, ensuring that
synthetic images do not leak patient privacy has become an important research topic.
      </p>
      <p>
        In this context, ImageCLEFmed GAN 2025[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] introduces a new subtask, which aims to determine
whether a specific real image has been used to train a generative model by analyzing synthetic medical
images. This task requires determining which real images have been used in the generation process by
calculating the similarity between synthetic and real images. Our team’s username is ZOQ. To address
this task, we propose a similarity classification method based on image enhancement and deep learning
models.
      </p>
      <p>
        Our method first performs multiple image enhancement processes on the image, including Gaussian
filtering, the Hessian matrix, the Laplacian operator, and bilateral filtering, to enhance the details and features
of the image.[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ][
        <xref ref-type="bibr" rid="ref10">10</xref>
        ][<xref ref-type="bibr" rid="ref11">11</xref>][<xref ref-type="bibr" rid="ref12">12</xref>] Then, a convolutional neural network (CNN) and ResNet50 are used to
extract high-dimensional features of the image and calculate the similarity between the real image and
the synthetic image. In the experiment, we used similarity calculation based on feature extraction of
the deep learning model. This method can effectively distinguish between "used" and "unused" images.
      </p>
      <p>Experimental results on the validation set show that our similarity classification method achieves
excellent performance in both accuracy and F1 score, with the best F1 score being 0.633. This proves
that the adopted method can effectively identify real images used for training synthetic images and has
great potential for application in privacy protection and synthetic image analysis.</p>
      <p>This paper is organized as follows: Section 2 introduces the task and the dataset, Section 3 elaborates
on our proposed method, Section 4 presents and discusses the experimental results, and finally, Section
5 summarizes the contributions of this paper and proposes future research directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The 2025 ImageCLEFmed GAN Subtask 1</title>
      <p>The task introduced in ImageCLEFmed GAN 2025 aims to study whether specific real images are used
to train the generative model to generate synthetic biomedical images. Participants are required to
annotate each real image in the test set to indicate whether it was used to generate the corresponding
synthetic image. Specifically, participants need to annotate each real image as "used" (1) or "not used" (0)
to determine whether it participated in the training process of generating the image. This subtask aims
to detect "fingerprints" in synthetic images, that is, to identify whether there are real image features in
the generated image that can be traced back to the training data. The data of the training set and test
set are shown in Table 1.</p>
      <p>This task focuses on the potential privacy leakage and data security issues in the process of generating
synthetic images. It explores whether the generative model can generate synthetic images that are
highly similar to real patient images, which may lead to the leakage of training data. Participants need
to analyze the test image dataset and evaluate whether certain real images are used in the training
process of the generative model. To this end, the task’s dataset includes real and synthetic images.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <sec id="sec-3-1">
        <title>3.1. Image Preprocessing and Enhancement</title>
        <p>Gaussian filter enhancement is a common image smoothing method that can effectively reduce the
image’s noise and retain the image’s edge information. In the experiment, we applied Gaussian blur to
smooth the details of the image and enhanced the image details through histogram equalization. This
enhancement method helps the model better identify the key information in the image during image
processing.</p>
        <p>Laplacian operator enhancement is an edge detection method that uses second-order derivatives to
identify edge information in an image. In the experiment, we used the Laplacian operator to extract edge
regions in the image. This edge information plays an important role in subsequent feature extraction,
helping the model focus on the details in the image, thereby improving the accuracy of similarity
calculation.</p>
        <p>The Hessian matrix enhancement method captures the local curvature information of the image by
calculating the second-order derivative of the image, which is particularly suitable for edge and detail
enhancement. We use the Hessian matrix to calculate the edge area of the image and further enhance
the details in the image. Through this method, we can enhance the high-frequency information in the
image, making the details in the image more prominent, thereby improving the quality of subsequent
feature extraction.</p>
        <p>Bilateral filtering is a smoothing method that can effectively preserve image edges, especially for
images with complex textures. In our experiments, bilateral filtering is used to smooth flat areas in the
image while keeping the edges sharp. By enhancing the details of the image, bilateral filtering improves
the image’s visual effect and makes the image’s key information more obvious.</p>
        <p>These four image enhancement methods enhance image details from different perspectives. Gaussian
filtering focuses on denoising and preprocessing, the Laplacian operator emphasizes the edge information
of the image, the Hessian matrix enhances the structural features of the image through curvature
analysis, and bilateral filtering improves image details through edge-preserving smoothing. These
enhancement methods complement each other in the feature extraction process, effectively improving
image quality and making subsequent feature extraction and similarity calculation more accurate. The
original generated image and the images after the four enhancement methods are shown in Figure 1.
The comparison of the image enhancement methods is shown in Table 2.</p>
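        <p>As an illustration, the following Python sketch applies the four enhancement steps with OpenCV. The kernel sizes and filter parameters are illustrative assumptions, since the exact settings are not specified above.</p>
        <preformat>
import cv2
import numpy as np

def enhance(gray):
    """Apply the four enhancement steps to an 8-bit grayscale image.
    All parameter values are illustrative assumptions."""
    # 1. Gaussian filtering, then histogram equalization to enhance details
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)
    equalized = cv2.equalizeHist(smoothed)

    # 2. Laplacian operator: second-order derivatives highlight edges
    laplacian = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)

    # 3. Hessian-based enhancement from second-order partial derivatives
    dxx = cv2.Sobel(gray, cv2.CV_64F, 2, 0, ksize=3)
    dyy = cv2.Sobel(gray, cv2.CV_64F, 0, 2, ksize=3)
    dxy = cv2.Sobel(gray, cv2.CV_64F, 1, 1, ksize=3)
    # The Frobenius norm of the Hessian emphasizes high-curvature regions
    hessian = np.sqrt(dxx**2 + 2 * dxy**2 + dyy**2)

    # 4. Bilateral filtering: smooths flat regions while keeping edges sharp
    bilateral = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)

    return equalized, laplacian, hessian, bilateral
        </preformat>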
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Feature Extraction</title>
        <p>Feature extraction is a key step in our task. By extracting effective features from images, we can
calculate the similarity between images and perform subsequent classification. In this task, we used
convolutional neural networks (CNN) and ResNet50 models to automatically extract image features,
taking advantage of the deep learning network’s ability to extract deep patterns in images.</p>
        <p>Convolutional Neural Network (CNN) is a powerful deep learning model widely used in computer
vision tasks, especially image classification, object detection, and image segmentation. In this task,
CNN is used to automatically extract features from images.</p>
        <p>CNN extracts features from images through multiple layers of convolution and pooling operations.
The convolution layer extracts different features from the image, such as edges, corners, and textures, by
performing convolution operations with local areas of the image. The pooling layer reduces the spatial
size of the image and retains important feature information by performing dimensionality reduction
on the convolution results. During the feature extraction process, CNN can gradually extract more
advanced features through convolution and pooling operations at each layer, thereby capturing complex
patterns and details in the image. We use the optimized CNN model to extract features from the image
and use these features for subsequent similarity calculations and classification. The model consists
of multiple convolutional, activation, and pooling layers. Through optimized structure and training
methods, it can efficiently extract low-level and high-level features of the image.</p>
        <p>The advantage of CNN is that it learns features automatically: through the back-propagation
algorithm, CNN can learn image features without manually designed feature extractors. CNN also uses
local connectivity and weight sharing, which greatly reduce the number of parameters and allow the
model to handle complex images better. Finally, the convolution operation is translation invariant,
which means that CNN can recognize the same objects in an image regardless of their position.</p>
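        <p>As a minimal sketch, assuming a PyTorch implementation, the kind of convolution-and-pooling feature extractor described above could look as follows; the layer counts and channel sizes are illustrative assumptions, not the exact architecture used in our experiments.</p>
        <preformat>
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal CNN feature extractor (illustrative architecture)."""
    def __init__(self, feature_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size descriptor
        )
        self.fc = nn.Linear(128, feature_dim)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale images
        x = self.features(x).flatten(1)  # (batch, 128)
        return self.fc(x)                # (batch, feature_dim) feature vectors
        </preformat>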
        <p>ResNet50 is a deep convolutional neural network designed around the concept of "residual
learning"; it uses residual blocks to mitigate the vanishing-gradient problem in deep networks. Owing
to its depth and effective structure, ResNet50 performs very well in image classification and feature
extraction tasks, especially when processing large-scale image datasets.</p>
        <p>ResNet50 consists of a 50-layer deep convolutional neural network, which mainly uses residual blocks
to enhance the network’s expressiveness. Each residual block contains several convolutional layers and
adds "skip connections" that allow signals to bypass certain layers directly, thereby avoiding the
vanishing-gradient problem that may occur in traditional deep networks. Through this residual-learning
mechanism, ResNet50 can train deeper networks and capture more complex features. In the feature
extraction process, ResNet50 gradually learns the complex patterns in the image through successive
multi-layer convolution operations and finally outputs a set of high-dimensional features through the
fully connected layer, which effectively represent the visual information of the image. In our study,
we used a custom ResNet50 model, trained on the task dataset, to extract high-level features of the
image.</p>
        <p>The image features extracted by CNN and ResNet50 are high-dimensional feature vectors, which
play an important role in similarity calculation. We use these high-dimensional feature vectors as
input for the subsequent similarity calculations and judge image similarity by comparing the
differences between the feature vectors of different images.</p>
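        <p>A sketch of ResNet50-based feature extraction is shown below, assuming a recent torchvision; the classification head is replaced by an identity so the 2048-dimensional pooled features are returned. Using ImageNet weights here is an assumption for illustration, whereas our custom model was trained on the task dataset.</p>
        <preformat>
import torch
import torch.nn as nn
from torchvision import models

# ResNet50 backbone with the final fully connected layer removed,
# so the network outputs 2048-dimensional feature vectors.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.fc = nn.Identity()
resnet.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) normalized images -> (N, 2048) features."""
    return resnet(batch)
        </preformat>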
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Similarity Calculation Methods</title>
        <p>In the 2025 ImageCLEFmed GAN task, the choice of similarity calculation method is crucial to evaluate
the similarity between generated images and real images. We used several common similarity calculation
methods, including cosine similarity, structural similarity index (SSIM), and Jaccard similarity. These
methods are widely used in image processing, machine learning, information retrieval, and other fields.
They can effectively measure the similarity between image features and provide strong support for
subsequent classification tasks.</p>
        <p>Cosine similarity is commonly used when calculating image similarity, particularly in content-based
image retrieval (CBIR) systems. Cosine similarity assesses the similarity between two vectors by
measuring the cosine of the angle between them. The core idea is that the closer the directions of the
two vectors, the more similar they are, regardless of their magnitudes. Calculating cosine similarity
involves converting each image into a vector form. This typically entails flattening the pixel values of
the image or features extracted from the image (such as color histograms, texture descriptors, shape
features, etc.) into a one-dimensional vector. The cosine similarity value ranges from -1 to 1, where 1
indicates identical directions (very similar), 0 indicates orthogonality (no similarity), and -1 indicates
completely opposite directions. Cosine similarity focuses on directional similarity, ignoring magnitude.
In some cases, two images might be very similar in terms of certain feature ratios, but the absolute
differences in actual pixel values could be significant.</p>
        <p>$$\cos(\theta) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\,\sqrt{\sum_{i=1}^{n} B_i^2}}$$</p>
        <p>where $A$ and $B$ are two feature vectors, $A \cdot B$ denotes their dot product, and $\|A\|$ and $\|B\|$ denote their magnitudes.</p>
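        <p>A minimal implementation of this formula over two extracted feature vectors might look like this (the variable names are illustrative):</p>
        <preformat>
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between feature vectors a and b, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# e.g. cosine_similarity(real_features, synthetic_features)
        </preformat>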
        <p>Structural Similarity (SSIM) is a more intuitive and effective method for calculating image similarity.
SSIM considers images’ luminance, contrast, and structural information, allowing it to reflect the
human visual system’s perception of image quality more accurately. SSIM first calculates the luminance
difference between two images; the luminance comparison is achieved by calculating the mean values
of the images, which reflect their overall brightness levels. Next, SSIM calculates the contrast
difference; the contrast comparison is achieved by calculating the standard deviations of the images:
the greater the standard deviation, the higher the image contrast. Finally, SSIM compares the
structural information of the two images. This step is achieved by calculating the covariance of the
images; covariance reflects the linear relationship between the images’ pixels, capturing their
structural characteristics.</p>
        <p>$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$</p>
        <p>where $x$ and $y$ are corresponding blocks of the two images, $\mu_x$ and $\mu_y$ are the mean values of image blocks $x$ and $y$, $\sigma_x$ and $\sigma_y$ are their standard deviations, $\sigma_{xy}$ is the covariance of image blocks $x$ and $y$, and $C_1$ and $C_2$ are small constants that stabilize the division.</p>
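        <p>In practice, SSIM is usually computed over local windows and averaged; one common way to obtain the score, assuming scikit-image is available, is:</p>
        <preformat>
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(img_real, img_synth):
    """Mean SSIM between two grayscale uint8 images of equal shape."""
    return structural_similarity(img_real, img_synth, data_range=255)

# Sanity check with a placeholder image: identical inputs give SSIM = 1.0
img = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
print(ssim_score(img, img))
        </preformat>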
        <p>Jaccard similarity is another commonly used method for measuring image similarity. It measures
the similarity between two sets and is especially suited to evaluating the overlap between objects or
regions in an image. In image processing, Jaccard similarity is often used to compare two binary
images, quantifying their similarity through the overlap of feature regions (such as edges, textures, or
salient targets) at the pixel level. The value of Jaccard similarity lies between 0 and 1, and the larger
the value, the more similar the two images are. For this task, we use Jaccard similarity to evaluate
the similarity between synthetic and real images. We first convert the real and synthetic images into
high-dimensional feature vectors through a feature extraction model (ResNet50). Then, we obtain a
binary representation of the salient regions in the image by binarizing these feature vectors. We set
a threshold: when the Jaccard similarity exceeds the threshold, the real image is considered to have
participated in the generation process; otherwise, the image is considered not to have been used. In this
way, we can label each real image as "used" or "unused". The advantage of Jaccard similarity is that it
intuitively reflects the degree of overlap of image features, especially for images or features after
binarization. Therefore, when processing synthetic images, Jaccard similarity can effectively capture the
common parts between images and help us identify similar areas between generated and real images.
In addition, Jaccard similarity is not affected by the scale or specific size of the image, so it has a certain
robustness when processing images of different sizes.</p>
        <p>$$\mathrm{Jaccard}(A, B) = \frac{|A \cap B|}{|A \cup B|}$$</p>
        <p>where $A$ and $B$ represent the sets of feature regions (such as edges, textures, or salient regions) of the
two images, $|A \cap B|$ is the number of common elements (the intersection) of the two sets, and $|A \cup B|$ is
the total number of elements in their union.</p>
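        <p>A sketch of this binarize-and-compare step is given below; the binarization threshold, the decision threshold, and the rule of taking the best score over the synthetic images are illustrative assumptions, since the exact values are not given above.</p>
        <preformat>
import numpy as np

def jaccard_similarity(feat_a, feat_b, bin_thresh=0.0):
    """Binarize two feature vectors and return |intersection| / |union|."""
    a = feat_a > bin_thresh
    b = feat_b > bin_thresh
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def label_real_image(real_feat, synthetic_feats, decision_thresh=0.5):
    """Label a real image 1 ("used") if its best Jaccard score against
    any synthetic image exceeds the decision threshold, else 0 ("not used")."""
    best = max(jaccard_similarity(real_feat, s) for s in synthetic_feats)
    return 1 if best > decision_thresh else 0
        </preformat>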
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Evaluation Metrics</title>
        <p>We use the following evaluation metrics to assess the model’s performance: the Kappa coefficient,
accuracy, precision, recall, and F1 score. Since the Kappa coefficient can effectively measure the
consistency between the model’s predictions and the true labels, especially in the case of class imbalance,
we use the Kappa coefficient as the main evaluation metric. At the same time, the F1 score is used to
provide a balanced evaluation of precision and recall. The definitions of these metrics are as follows:</p>
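        <p>With $TP$, $TN$, $FP$, and $FN$ denoting true positives, true negatives, false positives, and false negatives, the standard forms of these metrics are:</p>
        <p>$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}$$</p>
        <p>$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad \kappa = \frac{p_o - p_e}{1 - p_e}$$</p>
        <p>where $p_o$ is the observed agreement between the predictions and the true labels (the accuracy) and $p_e$ is the agreement expected by chance.</p>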
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experimental Results</title>
        <p>We conducted experiments with two different models (CNN and ResNet50) and three similarity
calculation methods (cosine similarity, SSIM, and Jaccard similarity); the results of the different
model and similarity-method combinations are shown in Table 3. We pay special attention to the Kappa
coefficient because it comprehensively evaluates the classification consistency of the model.</p>
        <p>Our team submitted 5 runs, with a best kappa value of -0.016 and a best F1 score of 0.633. The
results show that the Kappa coefficient of the CNN model is low, indicating that the model’s predictions
are only weakly consistent with the actual labels. When using SSIM similarity, the Kappa coefficient
improves, but the model still shows considerable uncertainty when processing the data; this may
be related to bias in the dataset or to overfitting of the model. The ResNet50 model has the smallest
Kappa coefficient in all experiments, especially under the Jaccard similarity calculation, where the Kappa
value is close to 0, indicating that its classification results are barely more consistent with the actual
labels than chance. Cosine similarity performs well on the CNN model: although its Kappa coefficient
is low, its recall is good, indicating that this method can effectively detect "used" images. SSIM similarity
is better than cosine similarity, especially on the CNN model, where it balances precision and recall
and obtains more stable results; SSIM considers the image’s structural information and may be more
suitable for such tasks. Jaccard similarity performs worst on ResNet50, with an F1 score of 0.448
and a negative Kappa coefficient, indicating that this method may not be suitable for such tasks.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This paper proposes a method that combines image enhancement with deep learning models to detect
the "fingerprint" of training data in synthetic biomedical images. We evaluate the similarity between
real and synthetic images using cosine similarity, SSIM, and Jaccard similarity. Experimental results
show that the CNN model performs best when combined with SSIM similarity calculation. Although the
performance of the ResNet50 model is weak, it still has the potential for optimization. Future research
can optimize the similarity calculation method, further improve the feature extraction ability of the
deep learning model, and enhance the performance and robustness of the model. This study provides
an effective method for the privacy protection of synthetic images and has good application prospects.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, ChatGPT-4o and Grammarly were used to check grammar and
spelling. After using these tools, the authors reviewed and edited the content as needed and take full
responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tsuneki</surname>
          </string-name>
          ,
          <article-title>Deep learning models in medical image analysis</article-title>
          ,
          <source>Journal of Oral Biosciences</source>
          <volume>64</volume>
          (
          <year>2022</year>
          )
          <fpage>312</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-I.</given-names>
            <surname>Suk</surname>
          </string-name>
          ,
          <article-title>Deep learning in medical image analysis</article-title>
          ,
          <source>Annual Review of Biomedical Engineering</source>
          <volume>19</volume>
          (
          <year>2017</year>
          )
          <fpage>221</fpage>
          -
          <lpage>248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Bellovin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Dutta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reitinger</surname>
          </string-name>
          ,
          <article-title>Privacy and synthetic datasets</article-title>
          ,
          <source>Stan. Tech. L. Rev.</source>
          <volume>22</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ying</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Vasilakos</surname>
          </string-name>
          ,
          <article-title>Privacy and security issues in deep learning: A survey</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2020</year>
          )
          <fpage>4566</fpage>
          -
          <lpage>4593</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Babaguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <article-title>Unnoticeable synthetic face replacement for image privacy protection</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>457</volume>
          (
          <year>2021</year>
          )
          <fpage>322</fpage>
          -
          <lpage>333</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.-G.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-D.</given-names>
            <surname>Ştefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prokopchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEFmedical 2025 - GANs Task</article-title>
          , in: CLEF2025 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          B. Ionescu, H. Müller, D.-C. Stanciu, A.-G. Andrei, A. Radzhabov, Y. Prokopchuk, L.-D. Ştefan,
          M.-G. Constantin, M. Dogariu, V. Kovalev, H. Damm, J. Rückert, A. Ben Abacha, A. García Seco de
          Herrera, C. M. Friedrich, L. Bloch, R. Brüngel, A. Idrissi-Yaghir, H. Schäfer, C. S. Schmidt,
          T. M. G. Pakull, B. Bracke, O. Pelka, B. Eryilmaz, H. Becker, W.-W. Yim, N. Codella, R. A. Novoa,
          J. Malvehy, D. Dimitrov, R. J. Das, Z. Xie, H. M. Shan, P. Nakov, I. Koychev, S. A. Hicks,
          S. Gautam, M. A. Riegler, V. Thambawita, P. Halvorsen, D. Fabre, C. Macaire, B. Lecouteux,
          D. Schwab, M. Potthast, M. Heinrich, J. Kiesel, M. Wolter, B. Stein,
          <article-title>Overview of ImageCLEF 2025: Multimedia retrieval in medical, social media and content recommendation applications</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and Interaction,
          <source>Proceedings of the 16th International Conference of the CLEF Association (CLEF 2025)</source>
          , Springer Lecture Notes in Computer Science LNCS, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mittal</surname>
          </string-name>
          , et al.,
          <article-title>Various image enhancement techniques-a critical review</article-title>
          ,
          <source>International Journal of Innovation and Scientific Research</source>
          <volume>10</volume>
          (
          <year>2014</year>
          )
          <fpage>267</fpage>
          -
          <lpage>274</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kanungo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          ,
          <article-title>An error-efficient gaussian filter for image processing by using the expanded operand decomposition logarithm multiplication</article-title>
          ,
          <source>Journal of Ambient Intelligence and Humanized Computing</source>
          <volume>15</volume>
          (
          <year>2024</year>
          )
          <fpage>1045</fpage>
          -
          <lpage>1052</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lavín-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Solís-Pérez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gómez-Aguilar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Razo-Hernández</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Etemad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rezapour</surname>
          </string-name>
          ,
          <article-title>An improved object detection algorithm based on the hessian matrix and conformable derivative</article-title>
          ,
          <source>Circuits, Systems, and Signal Processing</source>
          <volume>43</volume>
          (
          <year>2024</year>
          )
          <fpage>4991</fpage>
          -
          <lpage>5047</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A laplace operator-based active contour model with improved image edge detection performance</article-title>
          ,
          <source>Digital Signal Processing</source>
          <volume>151</volume>
          (
          <year>2024</year>
          )
          <fpage>104550</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N. S.</given-names>
            <surname>Awarayi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Twum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Hayfron-Acquah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Owusu-Agyemang</surname>
          </string-name>
          ,
          <article-title>A bilateral filtering-based image enhancement for alzheimer disease classification using cnn</article-title>
          ,
          <source>Plos one</source>
          <volume>19</volume>
          (
          <year>2024</year>
          )
          <fpage>e0302358</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>