<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Computer Vision Winter Workshop, Robert Sablatnig and Florian
Kleber (eds.), Krems, Lower Austria, Austria, Feb.</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Impact of Learned Domain Specific Compression on Satellite Image Object Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander Bayerl</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel Keglevic</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthias Wödlinger</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert Sablatnig</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Vision Lab, TU Wien</institution>
          ,
          <addr-line>Favoritenstraße 9/193-1, Vienna</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>5</fpage>
      <lpage>17</lpage>
      <abstract>
        <p>This paper proposes a methodology for learned compression of satellite imagery. The proposed method utilizes an image patching and stitching approach to address the high resolution of satellite images. We present rate-distortion metrics showing that this methodology outperforms JPEG2000, currently used on satellites. In addition, we demonstrate that using satellite images to train the compression model leads to superior performance compared to using non-domain-specific data. Furthermore, a detailed evaluation of the compression algorithm in a downstream classification task is conducted. The results demonstrate that 77.83% classification accuracy is still achievable for highly compressed images with a bitrate of 0.02 BPP when the classification model is trained on images from the same compression model. The downstream classification task evaluation highlights that the performance of the classification model is highly dependent on the type of compression applied to the training data. When trained with learned compression images, the model can only classify images with an acceptable level of accuracy (&gt;77%) if they have also undergone learned compression. Likewise, a model trained with JPEG images can only classify JPEG images with acceptable accuracy (&gt;89%).</p>
      </abstract>
      <kwd-group>
        <kwd>Learned Image Compression</kwd>
        <kwd>Satellite Imagery</kwd>
        <kwd>Remote Sensing</kwd>
        <kwd>Image Classification</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As remote sensing technology develops, satellites take photos with increasing spatial, temporal, and spectral resolution. This leads to an increasing amount of produced data per day, which is a challenge for data storage [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In addition to data storage, transferring satellite images from satellites to terrestrial nodes is a bottleneck in this process as well. Compression algorithms specialized for the satellite image domain have been developed to alleviate this problem [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2, 3, 4, 5</xref>
        ].
      </p>
      <p>
        Since image compression is a ubiquitous and fundamental operation, it is a well-studied topic. Improvements in image compression enable faster image data transfer and reduced storage costs. The invention of the discrete cosine transformation in 1972 by Nasir Ahmed et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] led to the definition of the JPEG format in 1992, which is still dominant. Ballé et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] showed in 2016 that using compression models trained by artificial neural networks can outperform traditional image compression algorithms like JPEG in terms of image quality and bitrate.
      </p>
      <p>
        For a specific image domain, further enhancements in learned compression can be achieved by limiting the training data to images from this domain. For example, Tsai et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] show that using domain-specific training data can significantly enhance the compression performance of video game images. Similarly, Wödlinger et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] demonstrate superior performance in stereo image compression compared to other approaches by designing a custom-built architecture and training it using domain-specific data.
      </p>
      <p>
        For satellite images, the following difficulty must be taken into account: currently, 27 satellites with a spatial resolution of less than 10 m per pixel are active, 19 of which have been launched in the last 20 years [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This results in increasing file sizes per satellite image [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which has to be considered when processing such images on neural network hardware accelerators. Even though a simple method for handling this is dividing the image into processable patches and compressing each patch independently, this leads to stitching artifacts on the border between two patches in the decompressed image.
      </p>
      <p>[Figure 1: data set partitioning — train-compress (1,289 images) for compression training; val-compress for compression evaluation, available uncompressed and compressed, and split into train-class (1,551 images) and val-class (378 images) for the downstream classification task.]</p>
      <p>This work examines learned image compression in the context of satellite photography:</p>
      <p>• We evaluate the proposed compression methodology in terms of the rate-distortion metric and the classification downstream task.
• We show that even with compression ratios as low as 0.02 BPP, a classification accuracy of 77.83% can be achieved as long as domain-specific data is utilized for training.</p>
      <sec id="sec-1-3">
        <title>2. State of the art</title>
        <p>
          Lossy image compression is the process of reducing the size of digital image data without sacrificing its overall quality. This differs from lossless image compression, which does not permit any information loss during the compression process.
        </p>
        <p>2.1. Traditional Image Compression</p>
        <p>
          A. J. Hussain et al. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] conducted an exhaustive survey on the subject of lossy image compression. The authors separate the compression approaches into predictive coding, transform coding, vector quantization, and neural network approaches.
        </p>
        <p>
          JPEG, the most popular lossy image codec, is based on transform coding, which uses the Discrete Cosine Transformation to convert an image from pixel-space to frequency-space [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The method utilizes the fact that the human visual system is less susceptible to variations in high-frequency components. By applying wavelet transformations on the image, JPEG2000 improves on that to achieve better rate-distortion metrics [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
        <p>
          More recently, Fabrice Bellard developed the BPG format (Better Portable Graphics), which outperforms JPEG and JPEG2000 in terms of rate and distortion [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. This format relies on the intraframe encoding of HEVC [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>2.2. Learned Image Compression</p>
        <p>
          Recently, image compression models based on artificial neural networks have outperformed traditional compression methods in terms of rate and distortion. Jamil et al. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] provide a survey on that subject. According to the findings of this survey, autoencoders are the most common learning-driven lossy image compression architectures. These models utilize an encoder to transform image data into a low-dimensional latent space. A decoder is then employed to reconstruct the original image from this encoding. The seminal work of this approach is from Ballé et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. They learn a probability distribution of the latent space jointly with the encoder and decoder networks trained to reconstruct the original image. Subsequent works employ hyperpriors and auto-regressive context models to decorrelate the spatial information in the latent space [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
        <p>
          Similarly, Toderici et al. [18] show that Recurrent Neural Network (RNN) architectures can be used for learned image compression. Their model leverages feedback loops to iteratively compress an image to the desired bit rate.
        </p>
        <p>
          Furthermore, Generative Adversarial Networks (GANs) have also been used in image compression. According to Jamil et al. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], GAN compression outperforms traditional image compression algorithms in terms of visual quality, albeit with the disadvantage of higher deployment costs.
        </p>
        <p>2.3. Satellite image compression</p>
        <p>
          Indradjad et al. [19] compare four different transform-coding approaches for satellite image compression: a wavelet approach by Delaunay et al. [20], bandelets [21], JPEG 2000 [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], and a discrete wavelet transformation method by the CCSDS (Consultative Committee for Space Data Systems) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Of these approaches, JPEG 2000 yields the highest peak signal-to-noise ratio (PSNR) as well as the second shortest compression and decompression times.
        </p>
        <p>
          More recently, de Oliveira et al. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] investigated neural networks for the compression of satellite images. An autoencoder with learned hyperprior is utilized to learn compression models for satellite imagery. The proposed method outperforms the CCSDS wavelet compression [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] currently used on French satellites in terms of rate and distortion.
        </p>
        <p>
          Bacchus et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] investigate the use of learned methods for onboard satellite image compression, to address the high memory and complexity constraints in this domain. The authors also employ a hyperprior-based architecture and incorporate data augmentations as a preprocessing step. Their method performs better than JPEG2000, and the authors concluded that its relatively low inference time makes it well-suited for use on satellites.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Methodology</title>
      <p>This section provides an overview of the methodology
proposed in this work. It begins with a brief
introduction to learned image compression, followed by an
explanation of how the technique is adapted to suit
high-resolution satellite images.</p>
      <sec id="sec-2-1">
        <title>3.1. Learned Image Compression</title>
        <p>
          This work is based on the compression model by Ballé et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Figure 2 shows an overview of the architecture. The model has an autoencoder structure, and the distribution of the quantized latent ŷ is modeled using a learned hyperprior and a context model that predict the parameters of a Gaussian distribution N(μ, σ). The autoregressive component utilizes already decoded pixels for decoding further pixels. This yields superior rate-distortion results, with the disadvantage that decoding has to be done iteratively and not in parallel.
        </p>
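        <p>As an illustrative sketch of how such an entropy model assigns probabilities (a simplified stand-in, not the authors' code; the function names are hypothetical), the probability mass of a quantization bin can be computed from the Gaussian CDF with the predicted parameters μ and σ:

```python
import math

def gaussian_cdf(x, mu, sigma):
    # CDF of N(mu, sigma) evaluated at x.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bin_likelihood(y_hat, mu, sigma):
    # Probability mass that N(mu, sigma) assigns to the quantization bin
    # [y_hat - 0.5, y_hat + 0.5]; this plays the role of the entropy model's
    # likelihood for the quantized latent value y_hat.
    return gaussian_cdf(y_hat + 0.5, mu, sigma) - gaussian_cdf(y_hat - 0.5, mu, sigma)
```

Taking −log₂ of this likelihood estimates the number of bits the arithmetic coder spends on that symbol, which is exactly what enters the rate term below.</p>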
        <p>We directly train the model with the trade-off between the distortion D of the original image and the compression rate R:</p>
        <p>L = R + λ · D (1)</p>
        <p>Here λ controls the trade-off between rate and distortion. For the distortion D the Mean Squared Error (MSE) is used, which computes the averaged pixel-wise quadratic difference between the original image and the distorted image:</p>
        <p>D = E_{x∼p_x}[ ‖x − x̂‖₂² ] (2)</p>
        <p>The compression rate R is estimated by the cross-entropy between the entropy model distribution p̂_ŷ and the actual marginal distribution p_ŷ, where ŷ denotes the latent encoding. Similarly, the rate of the hyperprior ẑ is calculated, which leads to the following definition for the rate loss:</p>
        <p>R = E_{x∼p_x}[ −log₂ p̂_ŷ(ŷ) ] + E_{x∼p_x}[ −log₂ p̂_ẑ(ẑ) ] (3), where the first term is the rate of the latents and the second the rate of the hyper-latents.</p>
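        <p>A minimal sketch of this rate-distortion objective (hedged: a simplified illustration, not the authors' implementation; the helper names are hypothetical and the likelihoods are assumed to come from the entropy model):

```python
import numpy as np

def bits(likelihoods):
    # Estimated code length in bits: sum of -log2 likelihoods (cross-entropy).
    return np.sum(-np.log2(likelihoods))

def rd_loss(x, x_hat, lik_y, lik_z, lam):
    # L = R + lam * D: rate for latents y and hyper-latents z, normalized
    # to bits per pixel, plus lambda-weighted mean-squared-error distortion.
    num_pixels = x.shape[0] * x.shape[1]
    rate = (bits(lik_y) + bits(lik_z)) / num_pixels
    dist = np.mean((x - x_hat) ** 2)
    return rate + lam * dist
```

Increasing the weight lam drives the optimizer toward lower MSE at the cost of more bits per pixel, matching the role of λ in Equation 1.</p>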
      </sec>
      <sec id="sec-2-2">
        <title>3.2. Stitching</title>
        <p>As discussed in the introduction, a limitation of satellite imagery is that image samples have resolutions of up to 14798 × 14802 pixels, which causes issues for the training and inference on neural network hardware accelerators such as GPUs. Since dividing the input into patches and processing the patches independently of each other leads to visible artifacts on the borders between the patches in the stitched images, our approach resolves this issue by compressing overlapping patches.</p>
        <p>
          For the stitched image, the average value of both patches (or four patches in corners) is used for the overlapping regions. Figure 3 illustrates the overlapping regions of a patched image. In the results of the proposed compression algorithm, stitching artifacts can be seen without blending; the boundaries of each patch are less visible in the blended image on the right.
        </p>
        <p>
          The dataset used in this work is the Functional Map of the World (fMoW). It was created at the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland (United States) and is publicly available at https://github.com/fMoW/dataset [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. This dataset was compiled to facilitate research in computer vision for remote sensing applications.
        </p>
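        <p>The averaging of overlapping patches can be sketched as follows (a minimal illustration under the stated blending rule, not the authors' code; `stitch` and its arguments are hypothetical names, and the patches are assumed to be already decompressed):

```python
import numpy as np

def stitch(patches, positions, out_shape):
    # Blend overlapping decompressed patches into one image: each output
    # pixel is the mean of all patches covering it (two patches on edges,
    # four in corners), which softens the visible seams between patches.
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for patch, (row, col) in zip(patches, positions):
        h, w = patch.shape
        acc[row:row + h, col:col + w] += patch
        cnt[row:row + h, col:col + w] += 1
    return acc / np.maximum(cnt, 1)
```

With the paper's setup, a 1496 × 1496 image would be covered by 256 × 256 patches whose positions overlap by 8 pixels, so only the narrow border strips are averaged.</p>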
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Evaluation</title>
      <p>This section provides an overview of the evaluation
process and presents the results of this work.</p>
      <p>Firstly, the utilized data set is described in detail, together with how it was employed in this work. Subsequently, the compression results on this data set are highlighted and discussed. Finally, the results of the downstream classification task on the compressed images are presented.</p>
      <sec id="sec-3-1">
        <title>4.1. Dataset</title>
        <p>[Figure 2 (architecture diagram): latent ŷ and hyper-latent ẑ, quantizer Q, arithmetic encoder/decoder (AE/AD), context model Φ, entropy parameters Ψ predicting N(μ, σ).]</p>
        <p>The partitioning of the data set used in this work is
shown in Figure 1. For compression training, 1,289
images from the fMoW train set, uniformly distributed over
all 63 categories, are used (train-compress). These 1,289
images are from 1,038 objects. As such, for some objects,
there are multiple images taken under different
environmental conditions.</p>
        <p>Another set, denoted val-compress, consists of 1,929 images from 1,038 objects from the fMoW validation set. The val-compress set serves two purposes: evaluating the compression and evaluating the downstream classification task. For the latter, val-compress is split again into 1,551 images for classification training (train-class) and 378 images for classification validation (val-class).</p>
      </sec>
      <sec id="sec-3-2">
        <title>4.2. Compression Evaluation</title>
        <p>
          The dataset includes over 1 million images of objects taken from satellites, categorized into 63 categories, such as airports, tunnel openings, zoos, and towers. Christie et al. [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] highlight the importance of obtaining a geographically distributed data set to minimize geographical bias.
        </p>
        <p>Overall the dataset contains about 628,000 training images and about 100,730 images for validation. The photographs are provided as compressed JPEG- and lossless TIFF-color images. Each object has been photographed in a variety of environmental settings (weather, time, season). Since this work explicitly focuses on high-resolution satellite images, only images with a resolution of at least 1024 × 1024 pixels are considered.</p>
        <p>[Figure 3: 1496 × 1496 image divided into 36 patches (256 × 256 pixels) with an overlapping region of 8 × 256 pixels between two patches.]</p>
        <p>With the parameter λ in Equation 1 the trade-off between rate and distortion can be controlled, i.e., increasing λ leads to a smaller MSE but therefore more BPP. To evaluate our model for different bitrates, we train the model with different values for the parameter λ. In Figure 5 the compression results with bitrates ranging from 0.003 BPP to 0.68 BPP are shown for an example image. The BPP of the compressed image is calculated directly by dividing the file size of the encoded image by the number of pixels in the respective image.</p>
        <p>The peak signal-to-noise ratio (PSNR) metric is used to evaluate the distortion. The distortion is calculated using the MSE between the compressed and the corresponding uncompressed images. The PSNR is defined as:</p>
        <p>PSNR = 10 · log₁₀( 255² / MSE ) (4)</p>
        <p>As depicted on the rate-distortion curve in Figure 6, the results indicate that the proposed learned compression methodology outperforms JPEG and is also superior to the JPEG2000 compression format, which is frequently used in satellite applications.</p>
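        <p>Both evaluation metrics can be sketched in a few lines (a minimal illustration assuming 8-bit images; `bpp` and `psnr` are hypothetical helper names, not the authors' code):

```python
import numpy as np

def bpp(encoded_size_bytes, height, width):
    # Bits per pixel: encoded file size (in bits) divided by the pixel count.
    return encoded_size_bytes * 8.0 / (height * width)

def psnr(original, compressed):
    # Equation 4 for 8-bit images: 10 * log10(255^2 / MSE).
    mse = np.mean((np.asarray(original, dtype=np.float64) -
                   np.asarray(compressed, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For example, a 64 × 64 image whose encoding occupies 512 bytes has a bitrate of exactly 1 BPP.</p>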
        <p>To verify that training the compression model with domain-specific satellite images improves the downstream classification task, another compression model was trained on 1,749 non-domain-specific samples from the ImageNet data set.</p>
        <p>The results in Table 1 show that the domain-specific compression model trained with satellite images outperforms the model trained with ImageNet samples. For a PSNR of 33 it yields a lower bit rate of 0.67 BPP compared to 0.84 BPP achieved by the domain-agnostic model.</p>
        <p>5. Classification Evaluation</p>
        <p>In addition to the evaluation in terms of image quality, in this section, compressed image quality is represented by the accuracy of a classification downstream task, i.e., identifying objects in satellite images. As mentioned in Section 4.1, compressed and uncompressed versions of the val-compress set are used, with 1,554 images used for training (train-class) the classification model, and 375 images used to validate the model (val-class).</p>
        <p>A dual path network [22] is utilized for this evaluation. For the evaluation of the classification downstream task, a separate classification model was trained on each of the compressed data sets.</p>
        <p>Each of these classification models has been used to validate data sets in different compression scenarios: JPEG data sets (0.31 BPP, 0.76 BPP, 1.55 BPP), learned-compression (LC) data sets (0.02 BPP, 0.67 BPP, 1.07 BPP), a data set retrieved from the ImageNet-trained learned compression (0.84 BPP), and one without compression. The results for these classification validations are shown in Table 2. The columns denote the data set the classifier was trained on, the rows denote the data set that was classified during validation.</p>
        <p>The results show that a classification model works best when classifying images that were compressed with the same algorithm (JPEG or learned compression) as the images on which it was trained, i.e., the JPEG-trained classifiers classified JPEG images with accuracies over 89%. In contrast, the JPEG-trained classifier only achieves an accuracy of up to 35.21% on images compressed by learned compression. Similarly, the accuracy of the LC-trained classifiers was at least 77% when classifying LC images (except for the very low bitrate of 0.02 BPP), and no more than 39.38% when classifying JPEG-compressed images.</p>
        <p>[Figure 5: (a) Original image (24.00 BPP); (b) LC 0.003 BPP; (c) LC 0.12 BPP; (d) LC 0.35 BPP; (e) LC 0.63 BPP; (f) JPEG 0.06 BPP; (g) JPEG 0.14 BPP; (h) JPEG 0.44 BPP; (i) JPEG 0.68 BPP.]</p>
        <p>Table 2: classification accuracy; columns denote the training set, rows the validation set.
Validated with:            LC 0.02 BPP    LC ImageNet 0.84 BPP    JPEG-trained
LC 0.02 BPP                77.83%         10.58%                  12.91%
LC 0.67 BPP                25.57%         15.67%                  32.89%
LC 1.07 BPP                26.90%         16.63%                  35.21%
LC ImageNet 0.84 BPP       14.94%         78.92%                  11.22%
JPEG 0.31 BPP              14.41%         11.22%                  89.81%
JPEG 0.77 BPP              16.59%         11.85%                  91.55%
JPEG 1.55 BPP              15.34%         14.32%                  91.55%
Uncompressed               15.00%         12.97%                  91.29%</p>
        <p>The ImageNet-trained compression model demonstrates that the training data is also crucial for the downstream classification task. It classified 78.92% of the images created by the same compression model, while on other data sets, even with high bitrate, it could not attain an accuracy greater than 16.63% for any other evaluation. It fails to classify all other data sets, including the LC-compressed ones trained with satellite images. This suggests that traditional compression methods lead to a more versatile encoding that is not as dependent on the specific domain.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>6. Conclusion</title>
      <p>In this work, we propose a satellite compression
methodology that outperforms traditional methods (JPEG,
JPEG2000) in terms of rate and PSNR. We show that
images that exceed the memory of typical neural network
hardware accelerators can be compressed by feeding in
patch-wise parts of the image. To remove artifacts at
the connection line between two patches, the connection
region is smoothed by compressing overlapping patches
and combining the pixels in these regions. The proposed
methodology offers superior performance compared to
JPEG and JPEG2000, commonly used for satellite
imaging. We assess the effects of compression on the
performance of an object classification downstream task. We
demonstrate that a classification model can learn to
classify images with an accuracy of 77.83% even for images
compressed with a bitrate as low as 0.02 BPP.
Furthermore, we show that using differently encoded images
for training and inference can deteriorate classification
accuracy significantly. As such, classification models
trained with JPEG images only achieve acceptable results
when tested on JPEG images. Similarly, classification
models trained with images compressed with learned
compression models fail when tested with JPEG images.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This project has received funding from the European
Union’s Horizon 2020 research and innovation program
under grant agreement No 965502.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <article-title>Big earth data: A new challenge and opportunity for digital earth's development</article-title>
          ,
          <source>International Journal of Digital Earth</source>
          <volume>10</volume>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.-S.</given-names>
            <surname>Yeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Armbruster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kiely</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Masschelein</surname>
          </string-name>
          , G. Moury,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schaefer</surname>
          </string-name>
          ,
          <string-name>
            <surname>C. Thiebaut,</surname>
          </string-name>
          <article-title>The new ccsds image compression recommendation</article-title>
          ,
          <year>2005</year>
          , pp.
          <fpage>4138</fpage>
          -
          <lpage>4145</lpage>
          . doi:10.1109/AERO.2005.1559719
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bacchus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fraisse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roumy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guillemot</surname>
          </string-name>
          ,
          <article-title>Quasi lossless satellite image compression</article-title>
          ,
          <source>IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium</source>
          (
          <year>2022</year>
          )
          <fpage>1532</fpage>
          -
          <lpage>1535</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>A. de Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chabert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Oberlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Poulliat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Latry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Carlavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Henrot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Falzon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Camarero</surname>
          </string-name>
          ,
          <article-title>Satellite image compression and denoising with neural networks</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Letters</source>
          <volume>19</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F. E.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. I.</given-names>
            <surname>Salama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Ibrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Bahy</surname>
          </string-name>
          ,
          <article-title>Investigation of on-board compression techniques for remote sensing satellite imagery</article-title>
          ,
          <source>in: International Conference on Aerospace Sciences and Aviation Technology</source>
          , volume
          <volume>11</volume>
          ,
          <source>The Military Technical College</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>937</fpage>
          -
          <lpage>946</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Natarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <article-title>Discrete cosine transform</article-title>
          ,
          <source>IEEE Transactions on Computers</source>
          C-23 (
          <year>1974</year>
          )
          <fpage>90</fpage>
          -
          <lpage>93</lpage>
          . doi:10.1109/T-C.1974.223784.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>G.</given-names>
            <surname>Toderici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vincent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Johnston</surname>
          </string-name>
          , S. Jin Hwang, D. Minnen, J. Shor, M. Covell,
          <article-title>Full resolution image compression with recurrent neural networks</article-title>
          , in:
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>5306</fpage>
          -
          <lpage>5314</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ballé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Laparra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Simoncelli</surname>
          </string-name>
          ,
          <string-name>
            <surname>End-</surname>
            to-end
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Minnen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Shor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Covell</surname>
          </string-name>
          ,
          <article-title>Full resolution image optimized image compression, in: 5th Interna- compression with recurrent neural networks</article-title>
          ,
          <source>in: tional Conference on Learning Representations, Proceedings of the IEEE conference on Computer ICLR</source>
          <year>2017</year>
          ,
          <year>2017</year>
          . Vision and Pattern Recognition,
          <year>2017</year>
          , pp.
          <fpage>5306</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Tsai</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.-H. Yang</surname>
          </string-name>
          , 5314. J.
          <string-name>
            <surname>Kautz</surname>
            , Learning binary residual representations [19]
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Indradjad</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          <string-name>
            <surname>Nasution</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Gunawan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Widifor domain-specific video streaming, Proceed- paminto, A comparison of satellite image compresings of the AAAI Conference on Artificial Intel- sion methods in the wavelet domain</article-title>
          ,
          <source>IOP Conferligence 32</source>
          (
          <year>2018</year>
          ). URL: https://ojs.aaai.org/index. ence Series: Earth and Environmental Science 280 php/AAAI/article/view/12259. doi:
          <volume>10</volume>
          .1609/aaai. (
          <year>2019</year>
          )
          <article-title>012031</article-title>
          . doi:
          <volume>10</volume>
          .1088/
          <fpage>1755</fpage>
          -1315/280/1/ v32i1.
          <fpage>12259</fpage>
          . 012031.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wödlinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kotera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sablatnig</surname>
          </string-name>
          ,
          <string-name>
            <surname>Sa</surname>
          </string-name>
          - [20]
          <string-name>
            <given-names>X.</given-names>
            <surname>Delaunay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chabert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Charvillat</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.</surname>
          </string-name>
          <article-title>Morin, sic: Stereo image compression with latent shifts Satellite image compression by post-transforms and stereo attention, in: 2022 IEEE/CVF Con- in the wavelet domain</article-title>
          ,
          <source>Signal Processing 90 ference on Computer Vision</source>
          and Pattern Recog- (
          <year>2010</year>
          )
          <fpage>599</fpage>
          -
          <lpage>610</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.sigpro.
          <source>2009. nition (CVPR)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>651</fpage>
          -
          <lpage>660</lpage>
          . doi:
          <volume>10</volume>
          .1109/ 07.024. CVPR52688.
          <year>2022</year>
          .
          <volume>00074</volume>
          . [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mallat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Peyré</surname>
          </string-name>
          , A Review of Bandlet Meth-
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <article-title>Surface water infor- ods for Geometrical Image Representation, Numation extraction based on high-resolution im- merical Algorithms 44 (</article-title>
          <year>2007</year>
          )
          <fpage>205</fpage>
          -
          <lpage>234</lpage>
          . URL: https: age, IOP Conference Series: Earth and Environ- //hal.archives-ouvertes.
          <source>fr/hal-00359744. mental Science</source>
          <volume>330</volume>
          (
          <year>2019</year>
          )
          <article-title>032013</article-title>
          . URL: https:// [22]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Yuppen</surname>
          </string-name>
          <string-name>
            <surname>Chen</surname>
          </string-name>
          ,
          <source>International Journal of Comdx.doi.org/10</source>
          .1088/
          <fpage>1755</fpage>
          -1315/330/3/032013. doi:10.
          <string-name>
            <surname>puter Applications</surname>
          </string-name>
          (
          <year>2017</year>
          ).
          <volume>1088</volume>
          /
          <fpage>1755</fpage>
          -1315/330/3/032013. [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <surname>L</surname>
          </string-name>
          . Fei-
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , Fei,
          <string-name>
            <surname>Imagenet:</surname>
          </string-name>
          <article-title>A large-scale hierarchical image P. Gong, An overview of the applications of database, in: 2009 IEEE conference on computer earth observation satellite data: Impacts and fu- vision and pattern recognition</article-title>
          , Ieee,
          <year>2009</year>
          , pp.
          <fpage>248</fpage>
          -
          <lpage>ture</lpage>
          trends,
          <source>Remote Sensing</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <year>1863</year>
          . doi:
          <volume>10</volume>
          .
          <fpage>255</fpage>
          . 3390/rs14081863.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Christie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fendley</surname>
          </string-name>
          , J. Wilson,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <article-title>Functional map of the world</article-title>
          ,
          <source>in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>6172</fpage>
          -
          <lpage>6180</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Fayadh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Radi</surname>
          </string-name>
          ,
          <article-title>Image compression techniques: A survey in lossless and lossy algorithms</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>300</volume>
          (
          <year>2018</year>
          )
          <fpage>44</fpage>
          -
          <lpage>69</lpage>
          . doi:https://doi.org/10.1016/j.neucom.
          <year>2018</year>
          .
          <volume>02</volume>
          .094.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Skodras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Christopoulos</surname>
          </string-name>
          , T. Ebrahimi,
          <article-title>The jpeg 2000 still image compression standard</article-title>
          ,
          <source>IEEE Signal Processing Magazine</source>
          <volume>18</volume>
          (
          <year>2001</year>
          )
          <fpage>36</fpage>
          -
          <lpage>58</lpage>
          . doi:
          <volume>10</volume>
          . 1109/79.952804.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Sullivan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-R.</given-names>
            <surname>Ohm</surname>
          </string-name>
          , W.-J. Han,
          <string-name>
            <surname>T</surname>
          </string-name>
          . Wiegand,
          <article-title>Overview of the high eficiency video coding (hevc) standard</article-title>
          ,
          <source>IEEE Transactions on Circuits and Systems for Video Technology</source>
          <volume>22</volume>
          (
          <year>2012</year>
          )
          <fpage>1649</fpage>
          -
          <lpage>1668</lpage>
          . doi:
          <volume>10</volume>
          .1109/TCSVT.
          <year>2012</year>
          .
          <volume>2221191</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jamil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Piran</surname>
          </string-name>
          ,
          <article-title>MuhibUrRahman, Learningdriven lossy image compression; a comprehensive survey</article-title>
          ,
          <year>2022</year>
          . URL: https://arxiv.org/abs/2201.09240. doi:
          <volume>10</volume>
          .48550/ARXIV.2201.09240.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Minnen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ballé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Toderici</surname>
          </string-name>
          ,
          <article-title>Joint autoregressive and hierarchical priors for learned image compression</article-title>
          ,
          <source>Advances in neural information pro-</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>