<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Generalization for Planetary Imagery</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Clara Salditt</string-name>
          <email>clara.salditt@web.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karan Molaverdikhani</string-name>
          <email>karan.Molaverdikhani@colorado.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Barbara Ercolano</string-name>
          <email>ercolano@usm.lmu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <kwd-group>
          <kwd>Data Quality</kwd>
          <kwd>Gen AI</kwd>
          <kwd>Model Generalization in Planetary Imagery</kwd>
          <kwd>Synthetic Images Quality</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Exzellenzcluster 'Origins'</institution>
          ,
          <addr-line>Boltzmannstr 2, D-85748 Garching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ludwig-Maximilians-Universität München (Munich University)</institution>
          ,
          <addr-line>Geschwister-Scholl-Platz 1, D-80539 München</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universitäts-Sternwarte</institution>
          ,
          <addr-line>Scheinerstr 1, D-81679 München</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>AI is set to play a crucial role in the future of space missions, enabling autonomous rover navigation, landing procedures, and terrain analysis. For these systems to perform reliably, they must be trained on large volumes of high-quality, task-specific data. However, in space science, data is often limited due to the high costs and power demands of transmission, and, more critically, it is not fully controllable. Synthetic data offers a promising solution by being both controllable and significantly more cost- and time-efficient. Yet, for synthetic data to genuinely enhance model performance, its quality must be rigorously evaluated. This work addresses that challenge by assessing the quality of synthetic data generated with StyleGAN2-ADA, trained on HiRISE imagery. An evaluation pipeline was developed to analyze the data using a range of established metrics. At the same time, it examines the reliability and relevance of these metrics themselves. The findings reveal a perceptual mismatch between model-based feature extractors and human judgment, raising concerns about the trustworthiness of current evaluation practices.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        AI applications in space science, such as surface classification models, autonomous navigation and
landing of rovers, and terrain analysis, have become increasingly relevant in recent years [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. These
systems not only enable real-time decision-making and increase the likelihood of mission success, but
also broaden the scope of space missions by reducing the need for human intervention in every decision.
However, as with any AI application, training such models requires large amounts of high-quality
data to ensure reliable performance. In space science and planetary exploration, data acquisition is
inherently limited, due to constraints such as costs, data security and the energy-intensive nature of
data transmission. Moreover, there is minimal control over image content in terms of angles, weather
conditions, or lighting.
      </p>
      <p>
        Synthetic data may help mitigate these challenges. It offers a cost- and time-efficient method to
generate data that can be tailored to address gaps and biases in real-world datasets, making it a valuable
complement to fully real datasets. Still, the effectiveness of synthetic data hinges on its quality. Therefore,
evaluating the quality of the data is fundamental to building reliable, high-performing AI systems
and helps trace the root causes of arising problems or failures. With the rise of generative AI, many
evaluation metrics have been introduced. These metrics typically rely on features extracted by convolutional
neural networks (CNNs) or vision transformers (ViTs), assuming that such models generalize well to
novel domains. Yet recent studies have highlighted biases and a lack of robustness in these feature
extractors when applied to unfamiliar domains [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ].
      </p>
      <p>Workshop on AI-driven Data Engineering and Reusability for Earth and Space Sciences (DARES’25). Code: https://github.com/ClaraSalditt/Metrics_on_Mars.git (C. Salditt). CEUR Workshop Proceedings, ISSN 1613-0073.</p>
      <p>
        Planetary imagery is particularly relevant in this context, as widely used feature extractors like Inception,
CLIP, and DINO have been primarily trained on Earth-based, object-centric images, making planetary
data distinctly out-of-distribution [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ]. This work contributes to the understanding of this issue by
evaluating the robustness of such models and of the metrics used to assess synthetic data. It
proposes an evaluation pipeline that examines a range of metrics concurrently and presents benchmark
results for StyleGAN2-ADA trained on HiRISE imagery. The findings suggest a fundamental perception
gap between the representations of backbone models and human judgment.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Dataset and Preprocessing</title>
      <p>
        Data from the HiRISE (High Resolution Imaging Science Experiment) catalog was used for this project
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. HiRISE is onboard the Mars Reconnaissance Orbiter, which has been orbiting Mars since 2006
at an altitude of approximately 250 to 316 kilometers. It captures images with a resolution of around
0.3 meters per pixel [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For training, the “JPEG IRB color no map” images were downloaded via web
scraping using BeautifulSoup [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. These images are not true-color RGB; instead, a min-max stretch is
applied to each color band to enhance visual contrast [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. After visual inspection, several common
image artifacts were identified, including corrupted upper borders and vertical black or blue columns.
Therefore, pre-processing began with the removal of the upper border, followed by the detection and
removal of black and blue columns. To detect blue columns, scripts were first tested on a small, manually
curated subset of images in order to fine-tune the hue range in the HSV color space. A hue range of [85,
95] (on OpenCV’s 0–179 scale) was found to be most effective. In addition to these specific artifacts,
some images exhibited more severe quality issues. A custom script was developed to detect extreme
hue values and flag such images for manual review. These flagged images were then sorted into “keep”
and “remove” sets based on visual inspection. After curation, the valid images were cropped into square
tiles of size 512 × 512 pixels. The final dataset used for training StyleGAN2-ADA comprised 173,603 tiles
generated from 24,609 original images. In contrast, the test dataset consisted of 50,000 tiles cropped
from 19,998 original images.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <p>
        The metrics used in this analysis can be grouped into two categories. Distribution-based metrics
evaluate the entire datasets by extracting feature maps and treating them as samples from underlying
distributions. These metrics then compute some form of statistical distance or divergence between the
distributions. Pairwise image similarity metrics, on the other hand, operate at the level of individual
image pairs, comparing them either directly in pixel space or in an embedded feature space.
      </p>
      <list list-type="bullet">
        <list-item>
          <p>Distribution-based metrics: FID [<xref ref-type="bibr" rid="ref12">12</xref>], KID-poly [<xref ref-type="bibr" rid="ref13">13</xref>], KID-rbf, CMMD [<xref ref-type="bibr" rid="ref14">14</xref>], Precision and Recall [<xref ref-type="bibr" rid="ref15">15</xref>], PPL [<xref ref-type="bibr" rid="ref16">16</xref>], ISC [<xref ref-type="bibr" rid="ref17">17</xref>]</p>
        </list-item>
        <list-item>
          <p>Pairwise image similarity metrics: MS-SSIM (pixel-based) [<xref ref-type="bibr" rid="ref18">18</xref>], PSNR (pixel-based), LPIPS-Alex [<xref ref-type="bibr" rid="ref16">16</xref>], LPIPS-VGG, DreamSIM [<xref ref-type="bibr" rid="ref19">19</xref>]</p>
        </list-item>
      </list>
      <p>
        To ensure a consistent evaluation benchmark, a dataset of 50,000 generated images was created using a
modified script based on generate.py from the StyleGAN2-ADA repository. Truncation was set to
ψ = 1.0, and the noise mode was fixed to const. For reproducibility, the dataset was generated using a
fixed random seed.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Distribution based metrics</title>
        <p>
          Table 1 shows the results of distribution-based metrics evaluated on the final StyleGAN2-ADA model.
KID-poly, KID-rbf, and ISC were calculated using InceptionV3 embeddings with the Pytorch- Fidelity
implementation [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. CMMD was computed with the PyTorch implementation from [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. FID was evaluated both with Pytorch-Fidelity and NVIDIA’s original FID50k implementation, the latter
using image samples generated directly from the network rather than from a pre-generated dataset.
        </p>
        <p>[Table 1: scores for FID (fid50k and fidelity), KID (fidelity poly and rbf), ISC (fidelity isc), CMMD, PPL (pplzend, pplwend, pplzfull, pplwfull), Precision, and Recall.]</p>
        <p>Notably, the FID from Fidelity is consistently higher than the FID from StyleGAN2-ADA’s
implementation. These differences may be due to variations in the image generation procedure.
Although both implementations use the InceptionV3 network, Fidelity uses a PyTorch version, while
NVIDIA’s original FID50k implementation relies on TensorFlow.</p>
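        <p>Both implementations ultimately compute the same statistic: the Fréchet distance between Gaussians fitted to the InceptionV3 features of the real and generated sets. A minimal sketch of that statistic, assuming the features have already been extracted as NumPy arrays, makes explicit that the implementation differences lie in the feature extraction, not in this formula:</p>

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat1, feat2):
    """Frechet distance between Gaussians fitted to two feature sets.

    feat1, feat2: (n_samples, n_features) arrays, e.g. InceptionV3
    activations of real and generated images. FID is this quantity
    computed on such features.
    """
    mu1, mu2 = feat1.mean(axis=0), feat2.mean(axis=0)
    sigma1 = np.cov(feat1, rowvar=False)
    sigma2 = np.cov(feat2, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerics
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```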
        <p>
          While no standardized benchmarks exist for synthetic planetary surface data, it is useful to contextualize
the reported values using results from more established datasets. For example, FID values for StyleGAN2
on well-known datasets typically range from 5.3 to 12.96 (see Table 1 in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]). KID provides an even
more favorable comparison: the values reported in Figure 11 of [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] range between 0.15 and 3.36,
depending on the training dataset. In contrast, Inception Scores between 8.55 and 10.02 are reported
in the same paper, which the models trained in this work do not reach. Regarding PPL in W-space,
values for StyleGAN2 have been shown to lie between 125 and 802 (see Table 2 in [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]), which is
considerably worse than the performance of StyleGAN2-ADA in this work. In the CMMD benchmarks
reported by [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], scores span roughly 0.55 to 1.14 for a variety of non–style-based generative models.
StyleGAN2-ADA, however, falls outside this range.
        </p>
        <p>[Figure 1: (a) a good-scoring pair; (b) a bad-scoring pair.]</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Pairwise Image Similarity Metrics</title>
        <p>Pairwise image similarity metrics were evaluated with three distinct matching schemes. First, each
generated image was paired with a randomly selected real image from the test set. As a reference, random
matching was also performed for real-to-real and generated-to-generated image pairs. Second, the
same k-nearest-neighbor algorithm used for precision and recall was applied to match every generated
image to its closest real counterpart in the test dataset, on the basis of image embeddings extracted
from DINOv2-ViT-B/14, CLIP-ViT-B/32, and InceptionV3. Third, the nearest-neighbor matching was
repeated, but this time using the training set instead of the test set. For all three schemes, pairwise
Euclidean distances were computed in the feature spaces of DINOv2-ViT-B/14, CLIP-ViT-B/32, and
InceptionV3 to find the nearest neighbor.</p>
        <p>[Figure 2: entropy vs. score for (a) MS-SSIM, (b) LPIPS-VGG, (c) DreamSIM.]</p>
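        <p>The nearest-neighbor matching step can be sketched as follows; a minimal illustration assuming the embeddings have already been extracted into NumPy arrays (the batching and batch size are implementation choices of this sketch, not from the paper):</p>

```python
import numpy as np

def nearest_real_indices(gen_feats, real_feats, batch=1024):
    """For each generated-image embedding, return the index of the closest
    real embedding under Euclidean distance.

    gen_feats: (n_gen, d) and real_feats: (n_real, d) arrays of embeddings,
    e.g. from DINOv2-ViT-B/14, CLIP-ViT-B/32, or InceptionV3.
    """
    real_sq = (real_feats ** 2).sum(axis=1)  # precompute ||r||^2 once
    out = np.empty(len(gen_feats), dtype=np.int64)
    for start in range(0, len(gen_feats), batch):
        g = gen_feats[start:start + batch]
        # squared distance = ||g||^2 - 2 g.r + ||r||^2; the ||g||^2 term
        # is constant per row, so it does not affect the argmin
        d2 = real_sq[None, :] - 2.0 * (g @ real_feats.T)
        out[start:start + batch] = d2.argmin(axis=1)
    return out
```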
        <p>The qualitative behavior of all metrics was the same. The average score improved when replacing
random pairing with nearest-neighbor matching, and improved further, albeit modestly, when nearest neighbors
were drawn from the training dataset. Interestingly, the choice of feature extractor has as much impact on the
mean score as switching from random to either of the nearest-neighbor pairings. This is highlighted by
an overlap analysis that examines the agreement between extractors: Fewer than 1 % of the generated
images are matched to a crop originating from the same real image, regardless of the extractor pair.
Among the three models, DINO and Inception agree the most, yet even in that case only 0.7 % of matches
point to the same original image.</p>
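        <p>The agreement rate in the overlap analysis reduces to a simple comparison; a minimal sketch, assuming the ID of the original (pre-cropping) HiRISE image behind each matched crop was recorded during matching:</p>

```python
import numpy as np

def match_overlap(parents_a, parents_b):
    """Fraction of generated tiles whose nearest-neighbor matches under two
    different feature extractors trace back to the same original image.

    parents_a, parents_b: for each generated tile, the ID of the original
    image from which its matched crop was taken, under extractor A and
    extractor B respectively (hypothetical bookkeeping for this sketch).
    """
    a, b = np.asarray(parents_a), np.asarray(parents_b)
    return float((a == b).mean())
```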
        <p>
          For both nearest-neighbor schemes and all metrics considered, CLIP yields the lowest average similarity
score, with Inception and DINO alternating positions above it. A visual examination of the image matches
showed that the highest-scoring pairs look visually similar but often lack large-scale structure, whereas
mid-range pairs typically show broader structural elements (see Figure 1). Paradoxically, some
lower-scoring pairs do not appear perceptually less similar than certain higher-scoring ones, suggesting
that the metrics penalize complex scenes. To probe the observation that high scores are only
achieved by pairs lacking structure, the mean Shannon entropy was computed for each color channel (RGB)
of every image, averaged over the pair, and plotted against the corresponding metric score. Figure 2
shows the results for (a) MS-SSIM as a pixel-based metric, (b) LPIPS-VGG as a feature-based metric,
and (c) DreamSIM, whose weights were trained to align with human perception [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. For MS-SSIM a
clear bias is evident: pairs with low entropy tend to achieve the best MS-SSIM scores. This bias is reduced
for LPIPS-VGG, where both the best and worst results are achieved by high-entropy pairs. The best-scoring
pairs with high entropy show only very small repetitive structures, not complex large-scale scenes.
For DreamSIM this trend vanishes. The corresponding Pearson matrix shown in Figure 3 confirms
a pronounced negative correlation (r ≈ -0.75 for MS-SSIM and r ≈ -0.80 for PSNR), meaning that image
pairs with higher Shannon entropy are systematically assigned lower similarity scores. By contrast,
LPIPS (AlexNet and VGG backbones) exhibits only a weak positive correlation with entropy (r ≈ 0.30),
while DreamSIM is virtually independent of entropy.
        </p>
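        <p>The entropy computation and the correlation analysis above can be sketched compactly; an illustrative reconstruction assuming 8-bit images (per-channel Shannon entropy over a 256-bin histogram, averaged over channels, then correlated with the metric scores):</p>

```python
import numpy as np

def mean_channel_entropy(img_uint8):
    """Mean Shannon entropy (bits) over the color channels of one 8-bit image."""
    ents = []
    for c in range(img_uint8.shape[-1]):
        counts = np.bincount(img_uint8[..., c].ravel(), minlength=256)
        p = counts / counts.sum()
        p = p[p > 0]                       # 0 * log(0) := 0
        ents.append(-(p * np.log2(p)).sum())
    return float(np.mean(ents))

def pearson_r(x, y):
    """Pearson correlation coefficient between entropies and metric scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```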
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Outlook</title>
      <p>This work has demonstrated that the fidelity and diversity of the generated images are comparable
to benchmarks from popular datasets, even though the training dataset used here was significantly
smaller. However, direct comparisons across entirely different datasets are not necessarily reliable, as
metric scores can vary greatly due to inherent dataset characteristics rather than the actual quality of
generated images.</p>
      <p>Furthermore, it was shown that merely looking at the numerical scores can mask underlying flaws in
the metrics themselves that arise on the particular dataset under study. For example, MS-SSIM and
PSNR showed a strong correlation with image entropy, while LPIPS displayed this effect to a lesser
extent. This indicates limitations in these metrics’ ability to recognize perceptual similarity, especially
in images with larger-scale structures. This bias could either be an inherent property of the metric, or it
could be caused by flawed nearest-neighbor matches and therefore be a problem in the backbone model.
This is plausible since the nearest-neighbor matches raised doubts about whether the embeddings match
human judgment of similarity, pointing towards a lack of generalization in the underlying feature
extraction models. To really prove that perceptually better matches exist, a comprehensive human-judged
study would have to be carried out.</p>
      <p>
        Quality control for synthetic data generation therefore still poses a challenge. Fine-tuning these models
to align with human judgment could play an essential role in making all feature-based metrics more
reliable and interpretable. There are two primary approaches to achieving this alignment: either
fine-tuning on a curated dataset specific to the context and application, or employing a more holistic
approach, such as that presented in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], which generally aims to match internal structure to human
cognition. However, fine-tuning on a specific dataset typically comes at the cost of generalization and
relies heavily on the availability of such curated data. This poses a significant challenge for planetary
science, as there are neither extensively labeled (besides datasets for crater detection) nor otherwise
curated datasets suitable for model training. Furthermore, it is particularly difficult because the nature
of these environments themselves is not yet fully understood, making it harder to definitively judge
if generated images accurately resemble reality. While a more holistic approach might offer greater
benefit, it too relies on potentially biased, often object-based, datasets. Future work should assess the
performance of such fine-tuned models within this unique planetary domain.
      </p>
      <p>Therefore, until a sufficiently trustworthy and universal metric is established, anyone generating, and
especially using, synthetic data should meticulously examine it rather than relying solely on single
scores like FID, IS, or KID. The details of such an examination are highly domain- and application-specific.
Some domains may have pre-defined statistics the generated images should match, or simulated data
that they can be compared with. But visual assessment of even small portions of the data can highlight
underlying deficiencies, in the sense that a metric does not align with human perception.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This research was supported by the Excellence Cluster ORIGINS which is funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy
EXC-2094 - 390783311.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used GPT-4 for grammar and spelling checking,
paraphrasing, and rewording. The author(s) reviewed and edited the content as needed and take(s) full
responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Viggh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Loughran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rachlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ruprecht</surname>
          </string-name>
          ,
          <article-title>Training deep learning spacecraft component detection algorithms using synthetic image data</article-title>
          ,
          <source>in: 2023 IEEE Aerospace Conference</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi: 10.1109/AERO55745.2023.10115578.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Suwinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Liesch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schnitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kohlsmann</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          <article-title>Janschek, 2d and 3d data generation and workflow for ai-based navigation on unstructured planetary surfaces</article-title>
          ,
          <source>in: AIAA SCITECH 2024 Forum</source>
          ,
          <year>2024</year>
          . URL: https://arc.aiaa.org/doi/abs/10.2514/6.2024-1744. doi: 10.2514/6.2024-1744.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Escalante Lopez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ghiglino</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Sanjurjo-Rivo, Applying machine learning techniques for optical relative navigation in planetary missions</article-title>
          ,
          <source>IEEE Transactions on Geoscience and Remote Sensing</source>
          <volume>62</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . doi: 10.1109/TGRS.2024.3374454.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Tu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gedeon</surname>
          </string-name>
          ,
          <article-title>Toward a holistic evaluation of robustness in CLIP models</article-title>
          ,
          <year>2024</year>
          . URL: http://arxiv.org/abs/2410.01534. doi: 10.48550/arXiv.2410.01534. arXiv:2410.01534 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Torpey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klein</surname>
          </string-name>
          ,
          <article-title>On the robustness of self-supervised representations for multi-view object classification</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>161</volume>
          (
          <year>2022</year>
          )
          <fpage>82</fpage>
          -
          <lpage>89</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0167865522002276. doi: 10.1016/j.patrec.2022.07.016.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Baharoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Qureshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aljouie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <article-title>Evaluating general purpose vision foundation models for medical image analysis: An experimental study of DINOv2 on radiology benchmarks</article-title>
          ,
          <year>2024</year>
          . URL: http://arxiv.org/abs/2312.02366. doi: 10.48550/arXiv.2312.02366. arXiv:2312.02366 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>RS-CLIP: Zero shot remote sensing scene classification via contrastive vision-language supervision</article-title>
          <volume>124</volume>
          (
          <year>2023</year>
          )
          <fpage>103497</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1569843223003217. doi: 10.1016/j.jag.2023.103497.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>HiRISE Team</string-name>
          ,
          <article-title>HiRISE image catalog, 2006-present</article-title>
          . URL: https://www.uahirise.org/, accessed: 2025-05-16.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>McEwen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Eliason</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Bergstrom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Bridges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. A.</given-names>
            <surname>Delamere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Grant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. C.</given-names>
            <surname>Gulick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Herkenhof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Keszthelyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Kirk</surname>
          </string-name>
          , M. T. Mellon,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Squyres</surname>
          </string-name>
          , N. Thomas,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Weitz</surname>
          </string-name>
          ,
          <article-title>Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE)</article-title>
          ,
          <source>Journal of Geophysical Research: Planets</source>
          <volume>112</volume>
          (
          <year>2007</year>
          ). URL: https://onlinelibrary.wiley.com/doi/abs/10.1029/2005JE002605, publisher: John Wiley &amp; Sons, Ltd.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Richardson</surname>
          </string-name>
          ,
          <source>Beautiful Soup documentation</source>
          (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>McEwen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Banks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baugh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Bergstrom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Beyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bortolini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Bridges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Byrne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Castalia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. C.</given-names>
            <surname>Chuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Crumpler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Daubar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Davatzes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Deardorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>DeJong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. Alan</given-names>
            <surname>Delamere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Dobrea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Dundas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Eliason</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Espinoza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fennema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Fishbaugh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Forrester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Geissler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Grant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Griffes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Grotzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. C.</given-names>
            <surname>Gulick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Herkenhoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Heyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. L.</given-names>
            <surname>Jaeger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kanefsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Keszthelyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Kirk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Kolb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lasco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lefort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Martinez-Alonso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mattson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>McArthur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Mellon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Metz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Milazzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Milliken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Motazedian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Okubo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Philippoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Plassmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Polit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schaller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Searls</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Spriggs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Squyres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tarr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Thomson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. L.</given-names>
            <surname>Tornabene</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Van Houten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Verba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Weitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Wray</surname>
          </string-name>
          ,
          <article-title>The high resolution imaging science experiment (HiRISE) during MRO's primary science phase (PSP)</article-title>
          ,
          <volume>205</volume>
          (
          <year>2010</year>
          )
          <fpage>2</fpage>
          -
          <lpage>37</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0019103509001808. doi:10.1016/j.icarus.2009.04.023.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Heusel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ramsauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Unterthiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nessler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          ,
          <article-title>GANs trained by a two time-scale update rule converge to a local Nash equilibrium</article-title>
          ,
          <year>2018</year>
          . URL: http://arxiv.org/abs/1706.08500. doi:10.48550/arXiv.1706.08500. arXiv:1706.08500 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bińkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Sutherland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arbel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gretton</surname>
          </string-name>
          ,
          <article-title>Demystifying MMD GANs</article-title>
          ,
          <year>2021</year>
          . URL: http://arxiv.org/abs/1801.01401. doi:10.48550/arXiv.1801.01401. arXiv:1801.01401 [stat].
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jayasumana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramalingam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Veit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Glasner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakrabarti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Rethinking FID: Towards a better evaluation metric for image generation</article-title>
          ,
          <year>2024</year>
          . URL: http://arxiv.org/abs/2401.09603. doi:10.48550/arXiv.2401.09603. arXiv:2401.09603 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kynkäänniemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <article-title>Improved precision and recall metric for assessing generative models</article-title>
          ,
          <year>2019</year>
          . URL: http://arxiv.org/abs/1904.06991. doi:10.48550/arXiv.1904.06991. arXiv:1904.06991 [stat].
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Efros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Shechtman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>The unreasonable effectiveness of deep features as a perceptual metric</article-title>
          ,
          <year>2018</year>
          . URL: http://arxiv.org/abs/1801.03924. doi:10.48550/arXiv.1801.03924. arXiv:1801.03924 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barratt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <article-title>A note on the inception score</article-title>
          ,
          <year>2018</year>
          . URL: http://arxiv.org/abs/1801.01973. doi:10.48550/arXiv.1801.01973. arXiv:1801.01973 [stat].
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abdel-Salam Nasr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>AlRahmawy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Tolba</surname>
          </string-name>
          ,
          <article-title>Multi-scale structural similarity index for motion detection</article-title>
          ,
          <volume>29</volume>
          (
          <year>2017</year>
          )
          <fpage>399</fpage>
          -
          <lpage>409</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1319157816300088. doi:10.1016/j.jksuci.2016.02.004.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tamir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sundaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dekel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <article-title>DreamSim: Learning new dimensions of human visual similarity using synthetic data</article-title>
          ,
          <year>2023</year>
          . URL: http://arxiv.org/abs/2306.09344. doi:10.48550/arXiv.2306.09344. arXiv:2306.09344 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Obukhov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Seitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-W.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhydenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kyl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Y.-J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>High-fidelity performance metrics for generative models in PyTorch</article-title>
          ,
          <year>2020</year>
          . URL: https://github.com/toshas/torch-fidelity. doi:10.5281/zenodo.4957738, version 0.3.0.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Paul</surname>
          </string-name>
          ,
          <article-title>CMMD: CLIP-based maximum mean discrepancy metric in PyTorch</article-title>
          ,
          <year>2024</year>
          . URL: https://github.com/sayakpaul/cmmd-pytorch, accessed: 2025-06-08.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kynkäänniemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aittala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          ,
          <article-title>The role of ImageNet classes in Fréchet inception distance</article-title>
          ,
          <year>2023</year>
          . URL: http://arxiv.org/abs/2203.06026. doi:10.48550/arXiv.2203.06026. arXiv:2203.06026 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aittala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hellsten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          , T. Aila,
          <article-title>Training generative adversarial networks with limited data</article-title>
          ,
          <year>2020</year>
          . URL: http://arxiv.org/abs/2006.06676. doi:10.48550/arXiv.2006.06676. arXiv:2006.06676 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aittala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hellsten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          , T. Aila,
          <article-title>Analyzing and improving the image quality of StyleGAN</article-title>
          ,
          <year>2020</year>
          . URL: http://arxiv.org/abs/1912.04958. doi:10.48550/arXiv.1912.04958. arXiv:1912.04958 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>L.</given-names>
            <surname>Muttenthaler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Greff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Born</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Spitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kornblith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Mozer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-R.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Unterthiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Lampinen</surname>
          </string-name>
          ,
          <article-title>Aligning machine and human visual representations across abstraction levels</article-title>
          ,
          <year>2024</year>
          . URL: http://arxiv.org/abs/2409.06509. doi:10.48550/arXiv.2409.06509. arXiv:2409.06509 [cs].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>