<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Conference and Labs of the Evaluation Forum</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>DMK-SSN at ImageCLEF 2023 Medical: Controlling the Quality of Synthetic Medical Images Created via GANs using Machine Learning and Image Hashing Techniques</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dhivya Subburam</string-name>
          <email>dhivyas@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shriram M SathyaNarayanan</string-name>
          <email>shriram2010160@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bhavana Anand</string-name>
          <email>bhavana2110584@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kavitha Srinivasan</string-name>
          <email>kavithas@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohanavalli Subramaniam</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Sri Sivasubramaniya Nadar College of Engineering</institution>
          ,
          <addr-line>Kalavakkam, Tamil Nadu</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>8</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>ImageCLEF, a platform dedicated to the evaluation and advancement of visual media analysis, conducted a novel challenge in its medical track titled Controlling the Quality of Synthetic Medical Images Created via GANs. This task investigates the hypothesis that synthetic images generated by Generative Adversarial Networks (GANs) retain distinct characteristics indicative of the real images used for training. This research aims to detect these characteristics and analyze the relationship between real and artificial biomedical image datasets. The task focuses on analyzing test image datasets to identify whether real images were present in the training process. The development dataset includes both artificial and real images of lung tuberculosis patients. Our team submitted two methods for evaluation: a CNN model achieved an F1 score of 0.4804, while the SIFT+KNN approach obtained an F1 score of 0.449. In addition to the models stated above, a pHash approach was applied, which enabled us to map real images to their corresponding synthetic counterparts. These findings provide valuable insights into detecting real-image characteristics in synthetic biomedical datasets, highlighting the importance of addressing security concerns when employing GANs for medical image generation.</p>
      </abstract>
      <kwd-group>
        <kwd>ImageCLEF2023</kwd>
        <kwd>Lung CT axial slices</kwd>
        <kwd>Image Hashing</kwd>
        <kwd>Convolutional Neural Network</kwd>
        <kwd>SIFT+KNN</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        ImageCLEF is an evaluation forum, organized annually, that encompasses research tasks oriented
towards image analysis and cross-language annotation. ImageCLEF 2023 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] focused on various
challenges aimed at advancing research contributions in visual analysis,
annotation, classification, and retrieval tasks. Medical tasks have been included
since the second edition of ImageCLEF under the tag ImageCLEFMedical [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which has
hosted several medical-domain tasks annually since 2004. Amongst the four
tasks proposed for 2023, evaluating the synthetic images generated by Generative
Adversarial Networks (GANs) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is a completely new challenge in the track. The objective is
to evaluate the prevalent theory that GANs produce synthetic images
carrying the fingerprints of the real images employed during GAN training. Thus, the goal of
this task is to identify whether a GAN-generated image possesses the fingerprint of a real
image; if so, then synthetic biomedical images are subject to the same restrictions on usage
and sharing as the actual real data. However, if the hypothesis is incorrect, then GANs
could be used for data augmentation in order to produce extensive biomedical databases that
are exempt from any privacy or ethical restrictions. This document describes DMK-SSN’s
participation in ImageCLEFmedical 2023 for the task Controlling the Quality of Synthetic
Medical Images created via GANs [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Since the discovery of X-rays, medical imaging has achieved significant advancements in
medicine and fundamentally revolutionised the way diseases are identified and treated [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
The most common imaging modalities used to study the internal workings of the human body
are computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound scanning.
With the development of Computer-Aided Diagnosis (CAD), shallow and deep learning techniques [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] have advanced dramatically in discriminative tasks
across the key facets of healthcare. One of the biggest difficulties encountered while applying these algorithms for disease
diagnosis is the lack of patient datasets for model training. This presents a major risk, particularly
in disease diagnosis. Additionally, models struggle to learn the underlying patterns in the datasets
during the training phase if there is an uneven distribution of the classes. To overcome these
challenges, several techniques are employed to scale up the databases. Traditional approaches
involve geometric transformations such as rotation, scaling, translation, flipping, and shearing.
In addition, deep model-based image synthesis can be achieved using
GANs and their variations. In the following, we first describe, in Section 2, related work on image
augmentation and its impact on classification using deep models, on comparing fingerprints
between two images, and on image hashing approaches, followed by the description
of the dataset provided for ImageCLEFmedical 2023 for the task Controlling the Quality of
Synthetic Medical Images created via GANs in Section 3. In Section 4, we describe the details of
the methods employed, and Section 5 describes the experiments and results. Section 6 presents
the conclusions for this task.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Advances in deep learning [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] have given GANs a lot of potential for medical imaging challenges
because of their capacity to produce high-quality, realistic images. Several pioneering
papers such as [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] have demonstrated the effectiveness of GANs in tasks such as image
generation, data augmentation, and quality control. The authors of [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] utilized deep convolutional
GANs for data augmentation in CT images of 182 liver lesions (53 cysts, 64 metastases and
65 haemangiomas). The authors of [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] employed data augmentation for improved lymph
node segmentation in CT images. Several works have shown improved results in the
segmentation and classification of images or lesions. In general, GANs comprise two networks:
a discriminator, which discriminates a real image from a generated image, and a generator
network, which is capable of producing synthetic images from random noise by incorporating
the discriminator’s feedback. Thus, if the images are produced by learning the underlying patterns,
then the generated images closely mimic the real images. The relationship
between a real image and a synthetic image can be identified by analysing the fingerprints
of the two images. This can be achieved by applying certain hash functions to both images. By
employing these techniques, the ideal outcome is to create a distinct fingerprint, or hash
value, for each image. This enables a quick search for the related generated
images in the dataset. Thus, the binary hash values indicate
which real image corresponds to which synthetic images. The GAN image hash and the real image
hash are compared using the Hamming distance. The Hamming distance counts how many bits
in two hash values’ corresponding places are different from one another [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Greater similarity
between the images is suggested by a smaller Hamming distance, whilst greater dissimilarity is
suggested by a larger Hamming distance. Instead of stating the criterion directly, we determine
the pair with the smallest Hamming distance and mark it as similar.
      </p>
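      <p>
        As a concrete illustration of this comparison, the following minimal Python sketch (our own
illustration, not taken from any cited work) computes the Hamming distance between two hash
values represented as integers:
      </p>
      <preformat>
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count the bit positions at which two hash values differ."""
    # XOR leaves a 1 in every position where the bits disagree;
    # counting the 1s gives the Hamming distance.
    return bin(hash_a ^ hash_b).count("1")

# Example: two 8-bit hashes that differ in exactly two bit positions.
h1 = 0b10101100
h2 = 0b10100101

print(hamming_distance(h1, h2))  # prints 2 (a small distance, i.e., similar images)
      </preformat>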
    </sec>
    <sec id="sec-3">
      <title>3. Datasets</title>
      <p>For the ImageCLEF 2023 task Controlling the Quality of Synthetic Medical Images
created via GANs, the organizers shared with us a benchmark image set collected from 8000
lung tuberculosis patients. The imaging modality is CT, and the dataset comprises axial slices
of 3D CT scans obtained from those patients. Amongst them, there are normal cases as well as
cases with lung lesions, or even more serious stages. The images are of 256×256 pixels and are in PNG
file format. Diffusion neural networks are employed in the generation of synthetic images, which
are of the same resolution as the real images. The initial development dataset comprised
500 synthetic images, around 80 real images which were not used for the training of the generative
models, and 80 real images used for training the generative models. The test dataset also
included images produced using the generative models: it contains 10,000 synthetic images and
200 real images, where the real images are not segregated into those used and those not used
for generative training. Figure 1 illustrates a sample image
from each of the three classes: generated, used and not used.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methods</title>
      <p>
        With the development dataset shared as part of the ImageCLEFmedical task, we analysed
the fingerprints of the two image sets, i.e., the generated images and the real images. The unique
fingerprints were identified using the Perceptual Hashing (pHash) algorithm illustrated in Figure 2.
The pHash algorithm [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] was applied to the generated images, producing a hash, i.e., a
fingerprint of the image. Similarly, hashes for the Used and Unused images were generated.
Note that the Used and Unused images are part of the training dataset. After computing the hashes,
the Hamming distance was computed between the used images and the GAN images. Once the
Hamming distances were computed, we identified the closest pair and declared it to form a pair of
source image and generated image. This observation was used to examine the pattern of Hamming
distances between source and generated images. It was evident that the Hamming distance values
clustered around 50. With this observation, we applied the same algorithm to the test datasets,
which consist of the real and generated images. Here, upon calculating the minimum distance,
we apply a threshold of 50, as observed. The source image is reported for any distance below
the threshold, and the real image is declared Unused if the minimum distance is more
than the threshold. This approach is used as a proof of concept to check whether the images
are appropriately classified as used or unused. The primary models in this section include a
Convolutional Neural Network and a Scale-Invariant Feature Transform (SIFT) algorithm with
the K-Nearest Neighbors (KNN) classifier. The hashing technique is a means to verify the results
and corroborate the outputs from these classifiers.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Model 1 : Convolutional Neural Network</title>
        <p>The first model devised for this task is based on a Convolutional Neural Network (CNN)
architecture. CNNs are widely used for image classification tasks due to their ability to effectively
capture spatial features from images. The model begins by loading and preprocessing the images.
Resizing the images to a uniform size ensures that all images have the same dimensions, which
is essential for further processing. The images are resized to a specific width and height, in this
case, 64×64 pixels. By resizing the images, we eliminate any variations in size that could hinder
the training process. After resizing, the training images are divided into two categories: used
images and unused images. These categories are represented by the labels 1 and 0, respectively,
and are finally concatenated as training labels. This prepares the data for training the CNN model.
The CNN model architecture consists of three convolutional layers, each followed by a max
pooling layer. Convolutional layers apply filters to the input images, capturing local patterns and
features. The max pooling layers downsample the output of the convolutional layers, reducing
the spatial dimensions and retaining the most important features. This helps in reducing the
computational complexity of the model. After the convolutional and max pooling layers, the
output is flattened and passed through two dense layers. The flattened output is fed into the
dense layers, which are fully connected layers. The activation function used in the convolutional
layers and the first dense layer is the Rectified Linear Unit (ReLU) function, which introduces
non-linearity and helps in capturing complex relationships in the data. The final dense layer
uses a sigmoid activation function, which squashes the output to a value between 0 and 1,
representing the probability of the image belonging to the used category. During training, the
model is fine-tuned by attempting to minimize the binary cross entropy loss. This loss function
measures the dissimilarity between the predicted probabilities and the true labels. The optimizer
used for minimizing the loss is Adam, which is an efficient optimization algorithm commonly
used for training deep neural networks. In the testing phase, a threshold of 0.5 is used to convert
the predicted probabilities into binary predictions. If the predicted probability is greater than or
equal to 0.5, the image is classified as used (1), otherwise, it is classified as unused (0). These
predictions are then printed out. The use of a CNN model along with appropriate preprocessing
techniques and activation functions allows for the effective classification of images into used
and unused categories. By training the model on a labeled dataset and fine-tuning it with
optimization techniques, the model can learn to distinguish between the features of used and
unused images. The resulting predictions provide insights into whether an unknown image
was used to generate GAN images or not. The proposed models are depicted in Figure 3.</p>
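        <p>
          A minimal Keras sketch of this architecture is given below, as a hedged reconstruction from
the description above: the 64×64 input size, three convolution/max-pooling stages, two dense
layers, ReLU and sigmoid activations, binary cross-entropy loss, Adam optimizer and the 0.5
decision threshold follow the text, while the filter counts and dense-layer width are illustrative
assumptions, since they are not reported.
        </p>
        <preformat>
from tensorflow import keras
from tensorflow.keras import layers

# Three convolution + max-pooling stages, then two dense layers (Section 4.1).
# Filter counts (32/64/64) and the dense width (64) are illustrative assumptions.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # 64x64 grayscale CT slices
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that the image was "used"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: float array of shape (N, 64, 64, 1); y_train: 1 = used, 0 = unused.
# model.fit(x_train, y_train, epochs=10)
# predictions = (model.predict(x_test) >= 0.5).astype(int)   # 0.5 threshold
        </preformat>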
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Model 2 : Scale-Invariant Feature Transform (SIFT) + K-Nearest Neighbors (KNN)</title>
        <p>The SIFT + KNN model is a powerful technique for image recognition that combines the
Scale-Invariant Feature Transform (SIFT) algorithm with the K-Nearest Neighbors (KNN) classifier.
This model leverages the strengths of both algorithms to achieve accurate and efficient image
classification. The SIFT algorithm plays a crucial role in the model by extracting robust and
distinctive features from images. It identifies key points that are invariant to scale, rotation,
and affine transformations. These keypoints are characterized by their scale, orientation, and
local descriptors that capture the image’s appearance in their local neighborhoods. The SIFT
algorithm provides robustness to various challenges such as changes in lighting conditions,
partial occlusions, and viewpoint variations. The KNN classifier, on the other hand, serves as
the classification mechanism in the model. Given a set of labeled training samples with their
corresponding feature vectors extracted using SIFT, the KNN classifier assigns labels to new,
unseen samples based on their proximity to the training samples in the feature space. It employs
the majority voting scheme among the K nearest neighbors to determine the label of the input
sample. In the training phase, the SIFT algorithm was first applied to the GAN images, used
images and the unused images. The features were clustered accordingly to create a vocabulary
of visual words. Similarly, SIFT features were extracted for the test images. These were then passed
through our classifier (KNN) to perform the required classification.</p>
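        <p>
          A sketch of this pipeline with OpenCV and scikit-learn is shown below as one plausible
reconstruction: the vocabulary size (100 visual words) and the number of neighbors (K=5) are
illustrative assumptions, since the paper does not report them, and train_paths, train_labels
and test_paths are placeholders for the dataset described in Section 3.
        </p>
        <preformat>
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(image_path):
    """Extract 128-dimensional SIFT descriptors from a grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc, kmeans, k):
    """Map an image's descriptors to a normalized bag-of-visual-words histogram."""
    hist = np.zeros(k)
    if len(desc):
        for word in kmeans.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

K_WORDS = 100   # illustrative vocabulary size
train_desc = [sift_descriptors(p) for p in train_paths]
kmeans = KMeans(n_clusters=K_WORDS, n_init=10).fit(np.vstack(train_desc))

X_train = np.array([bovw_histogram(d, kmeans, K_WORDS) for d in train_desc])
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)

X_test = np.array([bovw_histogram(sift_descriptors(p), kmeans, K_WORDS) for p in test_paths])
predictions = knn.predict(X_test)   # 1 = used, 0 = unused
        </preformat>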
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Model 3 : Hashing techniques for verification</title>
        <p>An improvement in the given tasks can be made by finding the source image from the set of
real images for the GAN images provided. This can be done using hashing techniques, one
such being perceptual hashing. This model starts by loading in the given test images, both
real and GAN. Perceptual hashing is a technique where Discrete Cosine Transform is used to
convert the image into a frequency domain. The low frequency components are then used to
construct hash values. This approach makes pHash robust to changes in scale, rotations and
minor modifications in the image. This ideally aims at generating a unique fingerprint (hash
value) for the images, which allows efficient searching and matching of similar images from our
dataset. The hash values generated by pHash are binary strings that represent the perceptual
content of the images. Once the hash values are computed and stored, the Hamming distance
between the GAN image hash and the real image hash is computed. The Hamming distance
measures the number of bit positions at which the corresponding bits in two hash values differ.
A lower Hamming distance indicates a higher similarity between the images, while a higher
Hamming distance suggests greater dissimilarity. As mentioned earlier in the paper, a threshold
of 50 was used to check if it is a used or unused image. Upon identification the source images
are found for the Used images. This approach allows for studying the connections between
the generated and real images, shedding light on the effectiveness of the SIFT+KNN and CNN
models in determining the source images.</p>
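        <p>
          A minimal sketch of this verification step using the Python imagehash library is given
below. The threshold of 50 is the value reported above; the hash size (hash_size=16, i.e., a
256-bit pHash) and the path variables are illustrative assumptions, since these details are not
stated in the paper.
        </p>
        <preformat>
from PIL import Image
import imagehash

THRESHOLD = 50  # threshold observed on the development data (Section 4)

def phash(path):
    # hash_size=16 yields a 256-bit hash; the actual hash length used is an assumption.
    return imagehash.phash(Image.open(path), hash_size=16)

gan_hashes = {p: phash(p) for p in gan_image_paths}     # placeholder path lists
real_hashes = {p: phash(p) for p in real_image_paths}

# For each real image, find the closest GAN image by Hamming distance;
# subtracting two ImageHash objects yields their Hamming distance.
for real_path, real_hash in real_hashes.items():
    best_gan, best_dist = min(
        ((g, real_hash - h) for g, h in gan_hashes.items()),
        key=lambda pair: pair[1],
    )
    if best_dist >= THRESHOLD:
        print(real_path, "unused; minimum distance", best_dist)
    else:
        print(real_path, "used; source of", best_gan, "at distance", best_dist)
        </preformat>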
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments and Results</title>
      <p>In this section, the proposed models are implemented, and the corresponding performance
metrics are discussed.</p>
      <sec id="sec-5-1">
        <title>5.1. System specification</title>
        <p>The proposed models are executed on a workstation with a six-core 3.9 GHz Intel processor,
32 GB of DDR4 RAM, and an NVIDIA GeForce RTX GPU with 11 GB of memory.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Results of the proposed methods</title>
        <p>The obtained results are illustrated in Table 1. Submission 1 is based on the CNN model,
which involves a 3-layer CNN to classify the used and unused images. The model is trained
for 10 epochs using the Adam optimizer, and the loss function used is binary cross entropy.
With this model, we achieved an accuracy of 0.535 and a precision of 0.544. The specificity and
recall obtained are 0.64 and 0.43. For submission 2, the SIFT + KNN model was used, and this
model achieved an accuracy of 0.61, precision of 0.512, specificity of 0.6 and a recall of 0.4 as
its performance metrics. Apart from the above models, as mentioned earlier in the paper, the
pHash technique was used. This technique enabled the mapping of source images to
their generated counterparts, as in Figure 4. Below are some of the outputs from the pHash
model. The first shows the output when the used set of images from the training dataset
was processed along with the GAN images. The second shows the same content for the real
images. The images with a “None” value were unused in the generation of the images
illustrated in Figure 5.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we proposed two approaches, using CNN and SIFT+KNN, to classify the used and
unused images for the generation of synthetic images. The F1 score of the first approach is
0.4804, while that of the second approach is 0.449. In terms of accuracy, we find that the SIFT
+ KNN approach has a value of 0.61, while that of the CNN model is 0.535. Along with
these, an image hashing technique, which identifies the fingerprint of the images through hash
values, is discussed. The Hamming distance is calculated and, based on its value, the generated
images can be mapped to their appropriate real images. This assists us in segregating the used
and unused images for the training of the diffusion network. In future work, these images can
be experimented with using one-shot or few-shot learning, which assists in achieving faster
and more efficient classification of generated images across different imaging modalities.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Snider</surname>
          </string-name>
          , G. Adams,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcıa Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Papachrysos</surname>
          </string-name>
          , Johanna Schöler,
          <string-name>
            <given-names>H.</given-names>
            <surname>Manguinhas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ştefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deshayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Popescu</surname>
          </string-name>
          , Overview of ImageCLEF 2023:
          <article-title>Multimedia retrieval in medical, social media and recommender systems applications</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and Interaction,
          <source>Proceedings of the 14th International Conference of the CLEF Association (CLEF</source>
          <year>2023</year>
          ), Springer Lecture Notes in Computer Science LNCS, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Coman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          , Overview of ImageCLEFmedical GANs 2023 task
          <article-title>- Identifying Training Data "Fingerprints" in Synthetic Biomedical Images Generated by GANs for Medical Image Security</article-title>
          , in: CLEF2023 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <article-title>Nips 2016 tutorial: Generative adversarial networks</article-title>
          ,
          <source>arXiv preprint arXiv:1701.00160</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O. H.</given-names>
            <surname>Karatas</surname>
          </string-name>
          , E. Toy,
          <article-title>Three-dimensional imaging techniques: A literature review</article-title>
          ,
          <source>European journal of dentistry 8</source>
          (
          <year>2014</year>
          )
          <fpage>132</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Aruleba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Obaido</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ogbuokiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. O.</given-names>
            <surname>Fadaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Klein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Adekiya</surname>
          </string-name>
          , R. T. Aruleba,
          <article-title>Applications of computational methods in biomedical breast cancer imaging diagnostics: A review</article-title>
          ,
          <source>Journal of Imaging</source>
          <volume>6</volume>
          (
          <year>2020</year>
          )
          <fpage>105</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dhivya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. Jenifer</given-names>
            <surname>Anjali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohanavalli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sripriya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Srinivasan</surname>
          </string-name>
          ,
          <article-title>Investigations of shallow and deep learning algorithms for tumor detection</article-title>
          ,
          <source>in: 2020 IEEE-HYDCON</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . doi:10.1109/HYDCON48903.2020.9242888.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          , Y. Bengio, G. Hinton,
          <article-title>Deep learning</article-title>
          , nature
          <volume>521</volume>
          (
          <year>2015</year>
          )
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dhivya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohanavalli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Karthika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shivani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mageswari</surname>
          </string-name>
          ,
          <article-title>Gan based data augmentation for enhanced tumor classification</article-title>
          ,
          <source>in: 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Esteva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuprel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Novoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Swetter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Blau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thrun</surname>
          </string-name>
          ,
          <article-title>Dermatologistlevel classification of skin cancer with deep neural networks</article-title>
          ,
          <source>nature</source>
          <volume>542</volume>
          (
          <year>2017</year>
          )
          <fpage>115</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-H.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <article-title>Deep learning for single image super-resolution: A brief review</article-title>
          ,
          <source>IEEE Transactions on Multimedia</source>
          <volume>21</volume>
          (
          <year>2019</year>
          )
          <fpage>3106</fpage>
          -
          <lpage>3121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Frid-Adar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Diamant</surname>
          </string-name>
          , E. Klang,
          <string-name>
            <given-names>M.</given-names>
            <surname>Amitai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Goldberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Greenspan</surname>
          </string-name>
          ,
          <article-title>Gan-based synthetic medical image augmentation for increased cnn performance in liver lesion classification</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>321</volume>
          (
          <year>2018</year>
          )
          <fpage>321</fpage>
          -
          <lpage>331</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.-X.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-B.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          ,
          <article-title>Abnormal chest x-ray identification with generative adversarial one-class classifier</article-title>
          ,
          <source>in: 2019 IEEE 16th international symposium on biomedical imaging (ISBI</source>
          <year>2019</year>
          ), IEEE,
          <year>2019</year>
          , pp.
          <fpage>1358</fpage>
          -
          <lpage>1361</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Shaik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Karsh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Laskar</surname>
          </string-name>
          ,
          <article-title>A review of hashing based image authentication techniques</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>