<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Tunisian-Algerian Conference on Applied Computing, December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Assessing the Generalizability of Deep Learning-Based Compression Techniques for Multibodypart X-ray Medical Images: A Comparative Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amina Fettah</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafik Menassel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdeljalil Gattal</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Informatics, Laboratory of Mathematics, Informatics and System (LAMIS), Echahid Cheikh Larbi Tebessi University, Tebessa</institution>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Informatics, Laboratory of Vision and Artificial Intelligence (LAVIA), Echahid Cheikh Larbi Tebessi University</institution>
          ,
          <addr-line>Tebessa</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>1</volume>
      <fpage>7</fpage>
      <lpage>18</lpage>
      <abstract>
        <p>For effective storage and transmission of healthcare data, efficient compression of medical images has become necessary. Deep learning has outperformed traditional image compression in terms of size reduction and image quality maintenance. This study is an extension of previously applied deep learning-based compression techniques, including an Autoencoder and a Deep Convolutional Autoencoder, originally evaluated on an X-ray medical imaging dataset (MXID), to the OPEN-I and JSRT datasets with different characteristics. The main goal of this research is an evaluation of the generalizability of the different models across these datasets and a comprehensive evaluation of the models' performance, highlighting the impact of training the three models on other datasets, in which remarkable findings were achieved on larger datasets and acceptable ones on smaller datasets. These promising contributions are intended for efficient medical image compression that preserves image quality, reduces size, and minimizes data loss while preserving image resolution, which is crucial for the healthcare domain, highlighting the influence of dataset size on compression performance.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep Convolutional Autoencoder</kwd>
        <kwd>Medical Image Compression</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>MXID X-ray Dataset</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In contemporary healthcare, medical imaging plays a pivotal role by delivering important information
about a patient's health. The transmission, analysis, and storage of MRI, CT, and X-ray modalities
present significant challenges in terms of file size, the need for reliable network speed, and the risk of losing
crucial details, which affects diagnostic accuracy. Another challenge is that the data volume of these
high-resolution images requires scalable and robust solutions. By focusing on the reduction of file size, lossy
compression introduces some degradation in image quality; nevertheless, it significantly reduces image
size [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. To address these challenges, it is essential to deploy effective compression strategies that reduce data size while
preserving the diagnostically relevant regions of medical images. The
concept of the Autoencoder (AE) was originally introduced by LeCun in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. An AE comprises two
neural network components: an encoder and a decoder, as indicated in Figure 1. These
two components are also used in the convolutional autoencoder (CAE), an extension of the AE
that employs convolutional neural networks (CNNs). CNNs consist of convolutional layers
and pooling layers; convolutional layers comprise multiple nodes that process 2D feature maps, and the
learnable parameters within these layers are the elements of the filter matrices [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
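As a toy illustration of the encoder/decoder structure described above, the sketch below implements a linear autoencoder with tied, untrained weights; it is a minimal stand-in, not the architecture used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder: 64-dim inputs -> 8-dim latent code -> 64-dim reconstruction.
d, k = 64, 8
W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights (untrained)
W_dec = W_enc.T                             # decoder weights, tied to the encoder

def encode(x):
    """Compress the input to the low-dimensional latent representation."""
    return x @ W_enc

def decode(z):
    """Reconstruct the input from the latent code."""
    return z @ W_dec

x = rng.normal(size=(5, d))     # five stand-in "images"
z = encode(x)                   # compressed representation (8x fewer values)
x_hat = decode(z)               # lossy reconstruction
```

A trained CAE replaces the matrix products with learned convolutional and pooling layers, but the compress-then-reconstruct flow is the same.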
      <p>
        Maintaining a balance between image quality and compression efficiency is highly required in the medical
imaging field. Deep learning-based methods compress images into an efficient latent representation
that can be reconstructed, providing comparable results and a considerable achievement in the field
of medical image compression while balancing compression efficiency and high
compressed-image quality, as discussed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        The deep learning-based image compression techniques previously applied to the MXID X-ray dataset, which was
assembled to encompass 18 distinct body parts ranging from the abdomen to the lung [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],
        ],
are extended and evaluated in this study on two additional datasets, OPEN-I and JSRT, for a broader
evaluation of their effectiveness. In our experiments,
the DCAE model demonstrated exceptional performance on the MXID and OPEN-I datasets while
preserving high-quality reconstructions, as evaluated by several metrics: Mean Squared
Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). The
application of these deep learning-based compression techniques to the JSRT and
Open-I datasets remains under-explored. This study aims to address this gap by applying our previously
developed models to these datasets, which include a broad selection of X-ray images with different
characteristics. Furthermore, to assess the models' robustness and generalizability on smaller and larger
datasets, the techniques were applied to these two additional datasets to compress a larger variety of medical
images while maintaining the high image quality required for accurate diagnosis.
      </p>
      <p>This paper is organized as follows: Section 2 presents related work on autoencoder-based
image compression. Section 3 details the methodology and results. Section 4 reviews the results and
discussion. Finally, Section 5 concludes and outlines future research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>Recent works on autoencoder-based approaches are dedicated to medical image compression, especially
convolutional autoencoders (CAEs), which preserve crucial diagnostic features by learning compact
representations and achieve efficient image compression results.</p>
      <p>
        To begin with, Mishra et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] proposed a two-stage autoencoder framework for lossy compression
of malaria-infected red blood cell images. To extract unique features for image reconstruction, the
method uses a residual-based dual-autoencoder network that incorporates color structural similarity
to maintain the quality of chrominance information, which is crucial for medical image analysis. This
method demonstrated high performance on different metrics such as Color-SSIM, PSNR, and
MS-SSIM. Moreover, the use of dual autoencoders and residual learning highlights the potential of deep learning
techniques for efficient medical image compression. On the other hand, Saravanan and Juliet [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
        propose a deep autoencoder architecture for medical image reconstruction using a Deep Boltzmann
Machine. Their aim was to improve reconstructed image quality through dimensionality reduction;
their results attain high performance, lower reconstruction error, and faster convergence. An efficient
pre-trained Deep Boltzmann Machine leads to an effective learning process in this work. Moving to Cheng
et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], their research compares the performance of three deep learning architectures, Convolutional
Autoencoders (CAEs), Generative Adversarial Networks (GANs), and super-resolution (SRCNN), for
image compression. SR-based compression, which leverages machine-learning-based super-resolution
filters and the BPG algorithm, achieves the best coding performance. On the other hand, CAEs demonstrate
their ability to extract compact image features, which makes them a promising tool for the image compression
domain. Concluding with M. Venugopal and K. Palanisamy, "Wavelet-based Convolutional Autoencoder
for Medical Image Compression" [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In this study, the authors aim to increase the compression ratio
while preserving diagnostic information for high-quality reconstruction. They integrate Convolutional
Autoencoders (CAEs) with wavelet transforms as a hybrid technique for medical image compression.
This approach uses the wavelet transform's multi-resolution analysis capability to decompose medical
images into different frequency components, which are then encoded using the CAE. The Wavelet-based
Convolutional Autoencoder (W-CAE) outperforms standard CAEs and traditional techniques through
efficient compression quality.
      </p>
      <p>
        In another work, using convolutional neural networks (CNNs) and wavelet transformation on
MRI, X-ray, and CT scans, Shukla et al. [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ] proposed an approach that compresses medical images using a
three-hidden-layer network, which outperformed lossy and lossless image compression techniques.
      </p>
      <p>The following sections provide an overview of the datasets used, the evaluation metrics, the achieved
results, and the discussion. Subsequently, we explore the potential of the DCAE on the different
medical imaging datasets and evaluate and discuss its performance.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <sec id="sec-3-1-1">
          <title>3.1.1. MXID dataset</title>
          <p>This section is organized as follows: the datasets used, the chosen performance evaluation metrics, and the results
and discussion.</p>
          <p>
            MXID dataset [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ] is a collection of 6,869 high-resolution X-ray images of 1024×1024 pixels from
AOUINET Hospital in Tébessa, Algeria. The images were manually classified into 18 distinct body parts, ranging from
the lung and abdomen to the wrist and pelvic basin.
          </p>
          <p>The models were also applied to the following additional datasets:</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>3.1.2. OPEN-I dataset</title>
          <p>
            OPEN-I [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] is a de-identified Indiana chest X-ray collection from the Indiana Network for Patient Care,
consisting of 8,121 associated images and 3,996 radiology reports. The authors performed de-identification on
the reports and images; the automatic de-identification of images required manual verification and was not
perfect, while the manual coding of the reports resulted in improved retrieval precision.
          </p>
        </sec>
        <sec id="sec-3-1-3">
          <title>3.1.3. JSRT dataset</title>
          <p>
            JSRT is a digital database [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ] consisting of chest radiographs collected from 14 medical centers, with a
total of 247 images with and without lung nodules at high resolution (2048 × 2048 matrix size) with
12-bit grayscale. These chest X-ray radiographs are grouped based on lung nodule subtlety with
varying characteristics.
          </p>
          <p>The same pre-processing steps were applied to the mentioned datasets, including resizing to a smaller
resolution of 256×256 pixels and normalization, to ensure uniformity in the input data format, with a batch
size of 4 due to memory limitations. The datasets were divided into three distinct sets: training (60%),
testing (20%), and validation (20%).</p>
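The pre-processing described above can be sketched as follows; this is a minimal stand-in using nearest-neighbor subsampling and synthetic data, since the paper does not specify the interpolation method or tooling used:

```python
import numpy as np

def preprocess(images, target=(256, 256)):
    """Resize each grayscale image to `target` (nearest-neighbor
    subsampling for brevity) and normalize pixel values to [0, 1]."""
    out = []
    for img in images:
        h, w = img.shape
        rows = np.arange(target[0]) * h // target[0]
        cols = np.arange(target[1]) * w // target[1]
        out.append(img[np.ix_(rows, cols)].astype(np.float32) / 255.0)
    return np.stack(out)

def split(data, train=0.6, test=0.2, seed=0):
    """Shuffle and split into training (60%), testing (20%), validation (20%)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train = int(len(data) * train)
    n_test = int(len(data) * test)
    return (data[idx[:n_train]],
            data[idx[n_train:n_train + n_test]],
            data[idx[n_train + n_test:]])

# Synthetic 8-bit 1024x1024 "X-rays" standing in for the real datasets.
rng = np.random.default_rng(1)
raw = [rng.integers(0, 256, (1024, 1024), dtype=np.uint8) for _ in range(10)]
data = preprocess(raw)
train_set, test_set, val_set = split(data)
```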
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Evaluation Metrics</title>
        <p>We used a set of quantitative metrics to evaluate the deep learning-based models' performance:</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Mean Squared Error (MSE)</title>
          <p>MSE is used to measure the average of the squared differences between the reconstructed and original
images; it is widely used in image processing, signal processing, and machine learning.</p>
          <p>MSE = (1/(m · n)) ∑_{i=1}^{m} ∑_{j=1}^{n} [I(i, j) − K(i, j)]²   (1)</p>
          <p>where I is the original image, K is the reconstructed image, and m × n is the image size.</p>
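As a minimal illustration of how MSE, and the PSNR derived from it in Section 3.2.3, are computed, assuming 8-bit images:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error over an m x n image pair (Eq. 1)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB, derived from the MSE."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(err)

# Toy 8-bit images: the "reconstruction" is off by 1 everywhere.
a = np.full((256, 256), 128, dtype=np.uint8)
b = a + 1
error = mse(a, b)    # 1.0
quality = psnr(a, b) # about 48.13 dB
```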
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Multi-Scale Structural Similarity Index (MS-SSIM)</title>
          <p>The Multi-Scale Structural Similarity Index (MS-SSIM) is used to measure the similarity between two images
based on luminance, contrast, and structure. It is derived from SSIM to evaluate quality considering
structural information at multiple scales.</p>
          <p>MS-SSIM(x, y) = [l_M(x, y)]^(α_M) · ∏_{j=1}^{M} [c_j(x, y)]^(β_j) · [s_j(x, y)]^(γ_j)   (2)</p>
          <p>where l, c, and s are the luminance, contrast, and structure comparison terms at scale j, and α, β, γ weight their contributions.</p>
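The sketch below computes a simplified single-scale SSIM over the whole image to illustrate the luminance/contrast/structure comparison; real SSIM and MS-SSIM implementations use local Gaussian windows and, for MS-SSIM, downsampling across scales:

```python
import numpy as np

def global_ssim(x, y, max_val=255.0):
    """Simplified single-scale SSIM computed over the whole image
    (real SSIM/MS-SSIM use local windows and multiple scales)."""
    c1 = (0.01 * max_val) ** 2          # stabilizes the luminance term
    c2 = (0.03 * max_val) ** 2          # stabilizes the contrast/structure terms
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.integers(-5, 6, size=(64, 64)), 0, 255)
score = global_ssim(img, noisy)  # close to, but below, 1.0
```

Identical images score exactly 1; any distortion pushes the score below 1, which is why the 0.99 MS-SSIM values reported later indicate near-perfect reconstructions.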
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Peak Signal-to-Noise Ratio (PSNR)</title>
          <p>PSNR measures the quality of the reconstructed image compared to the original
image; it is derived from the MSE and expressed in decibels (dB). It is widely used to evaluate the performance
of compression methods and represents the ratio between the maximum possible power of a signal and the power
of the corrupting noise.</p>
          <p>PSNR = 20 · log10(MAX) − 10 · log10(MSE)   (3)</p>
          <p>3.3. Results and Discussion</p>
        </sec>
        <sec id="sec-3-2-4">
          <title>3.3.1. Performance on MXID Dataset</title>
          <p>
            - Deep Convolutional Autoencoder (DCAE): For the deep convolutional autoencoder (DCAE) architecture,
the results have shown an outstanding level of performance in X-ray medical image reconstruction after
51 epochs, as depicted in Fig. 2, with a loss of 0.002 and using the PReLU activation function,
ensuring the trade-off between overall diagnostic information quality and compression efficiency, as
demonstrated in Figure 11. The model also achieved a PSNR value of 46.78 dB with a minimal loss of information, and an
MS-SSIM value of 0.99 suggesting an elevated similarity between the original and reconstructed images.
- Autoencoder (AE): For the application of the autoencoder on the MXID dataset, results have shown a
PSNR of 37.61 dB, an MSE of 0.014, and an MS-SSIM of 0.61 in [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], as depicted in Fig. 5, representing
a certain loss of detail in image quality, which is important for medical imaging applications.
While the overall structural integrity of the images has been preserved, this suggests enhancing
the autoencoder model's architecture for more accurate metric values and better image quality, as
illustrated in Figure 11.
- Convolutional Neural Network (CNN): The CNN's results indicate a certain loss after 21 epochs on the MXID
dataset, with an MSE of 0.006, a PSNR of 41.43 dB, and an MS-SSIM of 0.77 [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], showing better results
than the autoencoder's, especially in terms of PSNR. The deeper architecture may reflect better preservation of
fine details, which is crucial for medical imaging diagnosis, especially in the regions of interest.
          </p>
        </sec>
        <sec id="sec-3-2-5">
          <title>3.3.2. Performance on OPEN-I Dataset</title>
          <p>- Deep Convolutional Autoencoder (DCAE): When applying the DCAE on the OPEN-I dataset, the results
outperform all the other techniques, with a loss of 0.0001, which is relatively lower than the application
of the same DCAE architecture on the MXID and JSRT datasets. Furthermore, a PSNR of 47.14 dB
indicates higher visual quality and superior preservation of the different anatomical regions, with an
MS-SSIM value of 0.99 presenting a very high similarity between the input image and the reconstructed
one, as demonstrated in Figure 12.
- Autoencoder (AE): Comparing the autoencoder's results on the OPEN-I dataset with the other datasets,
after 21 epochs the autoencoder model achieved a PSNR of 39.07 dB, an MSE of 0.011, and an MS-SSIM of
0.70, the highest performance compared to the results on the MXID and JSRT datasets,
indicating better compression quality and effective generalizability on a larger dataset.</p>
          <p>- Convolutional Neural Network (CNN): Similarly, the application of the CNN on the OPEN-I dataset
achieved an MSE of 0.004, a PSNR of 42.40 dB, and an MS-SSIM of 0.80, which indicates better
performance compared to the MXID and JSRT results, as described in Table 1.</p>
          <p>3.3.3. Performance on JSRT Dataset</p>
          <p>- Deep Convolutional Autoencoder (DCAE): The DCAE's performance on the JSRT dataset,
characterized by a PSNR of 45.37 dB, an SSIM of 0.97, and an MSE of 0.001 after 21 epochs, as illustrated
in Figure 7, indicates a slight reduction in image quality compared to the MXID and Open-I datasets
due to the JSRT imaging conditions and its smaller dataset size; Figure 13 presents the visual
quality results for more clarity. Nevertheless, the subtle differences in SSIM and compression metric
values imply that dataset factors, including size, variability, and image quality, impact the model's
performance.
- Autoencoder (AE): Moving to the autoencoder's performance on the JSRT dataset, after 18 epochs the
model achieved a slightly superior PSNR of 37.84 dB compared to MXID, an MSE of 0.014, which is
similar to the autoencoder's results on MXID, and an MS-SSIM of 0.77, the highest value
compared to the two other datasets, reflecting that the original and reconstructed images are more
similar.</p>
          <p>- Convolutional Neural Network (CNN): The model achieved an MSE of 0.016, a PSNR of 36.96 dB, and an
MS-SSIM of 0.71 after 35 epochs, as mentioned in Table 1, which represents the lowest performance
of the CNN in comparison to the MXID and OPEN-I datasets, leading to under-performance
issues. Therefore, smaller datasets may not be suitable for CNN models, which need enough data to learn complex
patterns.</p>
          <p>Ultimately, the smaller JSRT dataset's size and its variability in image quality, such as noisier or
lower-quality images, affected the models' ability to generalize; the models also struggled to maintain the fine details crucial
for an accurate diagnosis in the reconstructed images. All in all, smaller datasets may increase the
risk of overfitting, especially for models with many parameters, such as deep convolutional architectures.
Additionally, AEs capture less complex characteristics due to limited data diversity; CNNs outperform
AEs in terms of feature extraction but need a larger dataset to generalize better. Finally, the DCAE
model extracts features by combining the advantages of the AE and CNN models; however, on the JSRT
dataset, the DCAE model captures features better than AEs but may overfit.</p>
        </sec>
        <sec id="sec-3-2-6">
          <title>3.3.4. Discussion</title>
          <p>The performance of the different techniques across the three X-ray datasets is particularly
higher on the larger and more diverse datasets, MXID and OPEN-I, highlighting the models'
potential for different clinical applications. However, subtle variations in reconstructed image quality
between datasets suggest the models' sensitivity to specific dataset characteristics, including
size, image diversity, and inherent variations within the data. These findings show that larger datasets enable the
models to learn more features.</p>
          <p>Furthermore, for the MXID dataset, the PSNR and MS-SSIM values indicate high quality in the reconstructed
images across the three models (AE, CNN, DCAE), preserving fine structural details over the 18 body
parts by maintaining and capturing structural information: bone nuances, nerve structures, joint
spaces, skeletal structures, vertebral structures, intervertebral discs, organs, tissues, and cardiovascular
details. This reflects a balance between image quality preservation and compression for multi-body-part
imaging datasets; Figures 11, 12, and 13 present a side-by-side visual comparison of the results.</p>
          <p>Moreover, the OPEN-I chest X-ray dataset presents slightly higher PSNR values across the three deep
learning models, due to the dataset's larger size, which offered more comprehensive training data enabling
the models to capture more features related to chest anatomy. In addition, larger datasets reduce the
risk of overfitting and improve the models' reconstruction quality.</p>
          <p>The third dataset, the smaller JSRT chest X-ray dataset, shows slightly lower PSNR values
than the OPEN-I and MXID datasets across the three deep learning models (AE, CNN, DCAE) because the
smaller dataset size limits the models' ability to capture more critical diagnostic characteristics.
Additionally, the image quality variability in this dataset may lead to decreased PSNR and MS-SSIM
values. Also, the blurriness produced in the reconstructed images could mask early signs of diseases
such as pneumonia or fibrosis, with an existing loss of contrast obscuring subtle differences
in tissue density that are important for diagnosis.</p>
          <p>
            Traditional methods such as JPEG and JPEG2000 are widely used for medical image compression
due to their computational efficiency; however, the JPEG method may achieve higher compression
ratios at the cost of a noticeable loss of crucial details, which matters for the diagnostic process
by specialists. On the other hand, JPEG2000 may preserve better quality at high compression levels and
outperforms JPEG in terms of image quality preservation [
            <xref ref-type="bibr" rid="ref16">15</xref>
            ], while balancing image fidelity
and storage needs; yet it requires more processing power and still struggles to capture complex features and
details compared to deep learning techniques, which offer better detail preservation while achieving
high compression ratios.
          </p>
          <p>Furthermore, applying deep learning techniques to the MXID and other datasets presents several
challenges. The MXID dataset exhibits several body parts with varied characteristics, luminosity, and
contrast levels. While the autoencoder reduces dimensions and noise, it fails to capture
complex features in reconstructed medical images, which limits the capture of details in the different
anatomical regions.</p>
          <p>On the other hand, although CNNs give better results in terms of complex feature extraction, they
need larger datasets to avoid overfitting and produce better reconstructed images; Figure 13 demonstrates the
impact of the CNN model on the JSRT dataset, where the images appear highly blurred and lack clear anatomical
details, indicating that the model struggles to capture fine structures. The DCAE model still achieves
high-quality compression with impressive results, exceptionally on the OPEN-I dataset, on which the
reconstruction error is minimized and the fine details are well preserved for a more accurate diagnosis.
Nevertheless, despite its powerful encoding efficiency in maintaining structural information
through convolutional layers, and its adaptability to noise, which is essential in medical imaging, DCAEs
are sensitive to training data quality: if the data contain artifacts or noise, the model reproduces these
characteristics too, which affects the reconstructed image quality; also, the training process for large
high-resolution datasets can be time-consuming.</p>
          <p>These findings highlight the potential of the different techniques across different complex and larger
medical imaging datasets, while presenting limitations with smaller-dataset characteristics such as noise
levels, especially in the JSRT dataset.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This research examines the generalizability and robustness of the DCAE, AE, and CNN models in compressing
diverse X-ray datasets. The models demonstrated high compression PSNR values and maintained image
quality, as measured by SSIM and MSE, on the MXID and OPEN-I datasets, presenting outstanding
results on larger datasets with different characteristics and a wider range of images, and successfully
compressing images while preserving their high quality.</p>
      <p>However, performance was not as strong on the lower-quality, noisier, and smaller
JSRT dataset, which offers fewer feature-extraction opportunities; this
may indicate certain limitations in processing images with decreased clarity or detail, decreasing
the models' generalizability. Future work will entail the integration of different optimization strategies to
enhance generalizability, such as transfer learning techniques pretrained on larger datasets and then
fine-tuned on smaller datasets. Also, data augmentation strategies, including scaling and rotation, could increase
dataset variability to generalize better to different image conditions and improve model performance.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>
        Figure 5: MXID AE Results: MSE, MS-SSIM, PSNR [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
      </p>
      <p>Figure 9: OPEN-I CNN Results: MSE, MS-SSIM, PSNR</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fettah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Menassel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gattal</surname>
          </string-name>
          ,
          <article-title>Machine learning for medical image analysis: A survey</article-title>
          ,
          <source>in: International Conference on Advanced Intelligent Systems for Sustainable Development</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>148</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          , Y. Bengio, G. Hinton,
          <article-title>Deep learning</article-title>
          , Nature
          <volume>521</volume>
          (
          <year>2015</year>
          )
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Autoencoder and its various variants</article-title>
          ,
          <source>in: 2018 IEEE international conference on systems, man, and cybernetics (SMC)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>415</fpage>
          -
          <lpage>419</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Raut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Thakar</surname>
          </string-name>
          ,
          <article-title>Image compression using convolutional autoencoder</article-title>
          ,
          <source>in: ICDSMLA 2019</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>221</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Guerrisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Del Frate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Schiavon</surname>
          </string-name>
          ,
          <article-title>Convolutional autoencoder algorithm for on-board image compression</article-title>
          ,
          <source>in: IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium</source>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>151</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cheng</surname>
          </string-name>
          , H. Sun,
          <string-name>
            <given-names>M.</given-names>
            <surname>Takeuchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Katto</surname>
          </string-name>
          ,
          <article-title>Deep convolutional autoencoder-based lossy image compression</article-title>
          ,
          <source>in: 2018 Picture Coding Symposium (PCS)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>253</fpage>
          -
          <lpage>257</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fettah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Menassel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gattal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gattal</surname>
          </string-name>
          ,
          <article-title>Convolutional autoencoder-based medical image compression using a novel annotated medical x-ray imaging dataset</article-title>
          ,
          <source>Biomedical Signal Processing and Control</source>
          <volume>94</volume>
          (
          <year>2024</year>
          )
          <fpage>106238</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Lossy medical image compression using residual learning-based dual autoencoder model</article-title>
          ,
          <source>in: 2020 IEEE 7th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Juliet</surname>
          </string-name>
          , et al.,
          <article-title>Deep medical image reconstruction with autoencoders using deep Boltzmann machine training</article-title>
          ,
          <source>EAI Endorsed Transactions on Pervasive Health and Technology</source>
          <volume>6</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Takeuchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Katto</surname>
          </string-name>
          ,
          <article-title>Performance comparison of convolutional autoencoders, generative adversarial networks and super-resolution for image compression</article-title>
          ,
          <source>in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>2613</fpage>
          -
          <lpage>2616</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Venugopal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Palanisamy</surname>
          </string-name>
          ,
          <article-title>Wavelet based convolutional autoencoder for medical image compression</article-title>
          ,
          <source>in: 2023 International Conference on Intelligent Systems for Communication, IoT and Security (ICISCoIS)</source>
          , IEEE,
          <year>2023</year>
          , pp.
          <fpage>83</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Shukla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <article-title>Medical images compression using convolutional neural network with LWT</article-title>
          ,
          <source>International Journal of Modern Communication Technologies and Research</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>265086</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Demner-Fushman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Kohli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Rosenman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Shooshan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Antani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Thoma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <article-title>Preparing a collection of radiology examinations for distribution and retrieval</article-title>
          ,
          <source>Journal of the American Medical Informatics Association</source>
          <volume>23</volume>
          (
          <year>2016</year>
          )
          <fpage>304</fpage>
          -
          <lpage>310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shiraishi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Katsuragawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ikezoe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Matsumoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kobayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-i.</given-names>
            <surname>Komatsu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matsui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fujita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kodera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Doi</surname>
          </string-name>
          ,
          <article-title>Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules</article-title>
          ,
          <source>American Journal of Roentgenology</source>
          <volume>174</volume>
          (
          <year>2000</year>
          )
          <fpage>71</fpage>
          -
          <lpage>74</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Shiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-S.</given-names>
            <surname>Chuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Chuang</surname>
          </string-name>
          ,
          <article-title>Quality of compressed medical images</article-title>
          ,
          <source>Journal of Digital Imaging</source>
          <volume>20</volume>
          (
          <year>2007</year>
          )
          <fpage>149</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>