<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>DA4DTE: Digital Assistant for Digital Twins of Earth</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>M. Tsokanaridou</string-name>
          <email>mtsokanaridou@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>J. Hackstein</string-name>
          <email>hackstein@tu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>G. Hoxha</string-name>
          <email>genc.hoxha@tu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>S.-A. Kefalidis</string-name>
          <email>skefalidis@di.uoa.gr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>K. Plas</string-name>
          <email>kplas@di.uoa.gr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>B. Demir</string-name>
          <email>demir@tu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. Koubarakis</string-name>
          <email>koubarak@di.uoa.gr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. Corsi</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>C. Leoni</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>G. Pasquali</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>C. Pratola</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>S. Tilia</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>N. Longépé</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff3">
          <label>3</label>
          <institution>e-GEOS S.p.A.</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>BIFOLD and Technische Universität Berlin</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens</institution>
          ,
          <country country="GR">Greece</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Φ-lab, ESA ESRIN</institution>
          ,
          <addr-line>Frascati</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>We present a new-generation, AI-agent-powered digital assistant featuring four specialized engines for satellite imagery: search by image, search by caption, visual question answering, and knowledge graph question answering. At the core of the system is a Task Interpreter, designed as a multi-agent system, which coordinates these engines to address complex user requests for Earth observation data. The Task Interpreter comprises four agents: an Engine Routing Agent that selects the appropriate engine or rejects unmanageable requests; a Conversational Agent that handles general or out-of-scope queries; an Argument Extraction Agent that identifies image type parameters for retrieval tasks; and a Tool Feasibility Agent that assesses the applicability of tools for domain-specific queries. This multi-agent system enables seamless interaction with Digital Twins of Earth, with an emphasis on modularity and extensibility to adapt to the rapid evolution of remote sensing technologies.</p>
      </abstract>
      <kwd-group>
        <kwd>knowledge graph question answering</kwd>
        <kwd>Multi-agent systems</kwd>
        <kwd>digital assistant</kwd>
        <kwd>digital twins</kwd>
        <kwd>search by image</kwd>
        <kwd>search by caption</kwd>
        <kwd>visual question answering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In Artificial Intelligence (AI), an agent is an autonomous entity capable of perceiving its environment,
making decisions, and acting upon it to achieve specific goals. The study of multi-agent systems (MAS)
is a subarea of AI concerned with societies of agents in cooperative or competitive settings, and it has a
long tradition of outstanding research results. With the recent revolution of large language models (LLMs)
and foundation models (FMs), the area of MAS is again receiving a lot of attention, as seen in the
proposal of LLM-powered agent frameworks such as AutoGen [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], LangChain, and CrewAI.
      </p>
      <p>
        As part of these recent developments, we have seen the proposal of agent and multi-agent system
architectures powered by LLMs in the Remote Sensing (RS) area [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5 ref6">2, 3, 4, 5, 6</xref>
        ]. Remote Sensing ChatGPT [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
introduces a system where ChatGPT interprets user requests and sequentially invokes specialized RS
models for tasks such as object detection and land use classification. RescueADI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] focuses on disaster
interpretation, employing an LLM-driven agent to dynamically plan and execute multiple specialized
tasks like damage assessment and rescue pathfinding. RS-Agent [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] extends this paradigm by integrating
high-performance tools and a retrieval-augmented knowledge base to support professional geospatial
analysis. GlobeFlowGPT [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] applies a multimodal LLM orchestrator to facilitate complex geospatial
workflows, including flood forecasting and vegetation monitoring, with containerized tool integration.
Similarly, GeoLLM-Squad [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] adopts a MAS, using an orchestrator to coordinate specialized agents for a
broad range of remote sensing tasks, such as urban monitoring, climate analysis, forestry protection,
and agricultural studies. Like our approach, it emphasizes modularity, extensibility, and the separation
of orchestration from task-solving components.
      </p>
      <p>Workshop on AI-driven Data Engineering and Reusability for Earth and Space Sciences (DARES’25), co-located with the 28th</p>
      <p>CEUR Workshop Proceedings, ISSN 1613-0073</p>
      <p>Parallel to these developments, the emergence of Digital Twins of Earth (DTEs) —high-fidelity, dynamic
digital representations of the Earth’s systems—has created new demands for intelligent, continuous
interaction with massive Earth observation (EO) datasets. DTEs require the ability to access, interpret,
and integrate diverse data streams in a flexible, scalable, and context-aware manner. However, despite
recent advances, no EO data provider currently offers a digital assistant capable of guiding
users in finding the EO data they seek. This is a critical functionality gap, especially as the volume of
EO data made available through initiatives like Copernicus and Landsat continues to expand. Without
intelligent assistance, this wealth of data remains difficult to access for both expert and novice users.</p>
      <p>To address this challenge, we introduce the Digital Assistant for Digital Twins of Earth (DA4DTE), an
AI-powered multi-agent digital assistant designed to facilitate seamless interaction with EO datasets.
In DA4DTE, a Task Interpreter operates as a multi-agent system comprising specialized agents that
collaboratively interpret user requests and orchestrate the activation of appropriate search engines or
tools. We distinguish between the specialized engines serving EO tasks, the multi-agent Task Interpreter
with its agents—autonomous functional components responsible for specific subtasks—and the assistant,
the overall user-facing system deployed to fulfill complex information retrieval workflows. We make
our system publicly available at https://github.com/rsim-tu-berlin/DA4DTE/.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Multi-Agent System for Orchestration</title>
      <p>DA4DTE enables a user to pose multi-modal requests that, in addition to text, can include RS images,
either uploaded or selected on the User Interface map. The assistant’s toolset allows for a variety of
requests, including geospatial or visual queries, requests for images described by their visual content or
metadata, image search requests, and queries for explanations of image-similarity results. Between the
user and the DA4DTE engines lies the Task Interpreter: a MAS responsible for engine orchestration and
the mediation between the user and individual engines. The architecture is illustrated in Figure 1, which
highlights the collaborative roles of each agent module and their interactions with the user interface
and underlying engine components.</p>
      <p>
        To ensure future extensibility, we categorize orchestration responsibilities into two types: core and
assistant tasks. Core tasks are permanent and fundamental to any version of the assistant, regardless of
the tools or data sources integrated. In contrast, assistant tasks are tailored to the current implementation
state and may evolve as functionalities and resources expand. Each task is assigned to a dedicated
agent, forming a MAS, implemented using the AutoGen [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] framework and currently comprising the
following four agents.
      </p>
      <p>The first agent is the Engine Routing Agent (Core). This agent is a zero-shot prompted<sup>1</sup> LLM that
selects the most appropriate engine to activate based on the user’s request. It also has the capability to
reject requests that fall outside the scope of all available engines.</p>
      <p>The second agent is the Conversational Agent (Core). This is a fallback conversational agent
designed to handle general, ambiguous, or out-of-domain queries. Although it is a capable LLM, it
is specifically prompted not to respond to irrelevant requests, ensuring that the assistant remains
task-focused.</p>
      <p>The third agent is the Argument Extraction Agent (Assistant). This is an agent dedicated to
extracting key parameters required by specific tools. In the current implementation, it identifies the
requested image type (e.g., Sentinel-1 or Sentinel-2) when the Search-by-Image engine is activated.</p>
      <p>Finally, the fourth agent is the Tool Feasibility Agent (Assistant). This is a utility agent responsible
for validating whether a requested operation is feasible under current system capabilities. For example,
the Search-by-Text engine presently supports only vessel-related queries. If a user request falls outside
this domain, the agent triggers a relevant explanatory message to the user.</p>
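      <p>As an illustration of how these four agents divide the work, the following Python sketch mimics the Task Interpreter’s control flow with simple keyword heuristics. It is illustrative only: the deployed agents are zero-shot prompted LLMs built with the AutoGen framework, and the engine names and heuristics below are hypothetical stand-ins.</p>

```python
# Illustrative sketch of the Task Interpreter's control flow. The real
# agents are zero-shot prompted LLMs (AutoGen framework); the keyword
# heuristics and engine names below are hypothetical stand-ins.

def engine_routing_agent(request: str, has_image: bool) -> str:
    """Select the engine for a request, or fall back to conversation."""
    text = request.lower()
    if has_image and "similar" in text:
        return "search_by_image"
    if has_image:                        # question about an attached image
        return "visual_qa"
    if any(k in text for k in ("cloud coverage", "snow", "sentinel")):
        return "kg_qa"                   # metadata/spatiotemporal criteria
    if "vessel" in text or "ship" in text:
        return "search_by_text"
    return "conversational"              # fallback Conversational Agent

def tool_feasibility_agent(engine: str, request: str) -> bool:
    """Search-by-Text currently supports only vessel-related queries."""
    if engine == "search_by_text":
        return "vessel" in request.lower() or "ship" in request.lower()
    return True

def argument_extraction_agent(request: str):
    """Extract the image-type argument for the Search-by-Image engine."""
    for modality in ("sentinel-1", "sentinel-2"):
        if modality in request.lower():
            return modality
    return None
```

      <p>For example, a request for “a similar Sentinel-2 image” with an attached image would be routed to the Search-by-Image engine with the extracted modality argument “sentinel-2”.</p>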
      <p><sup>1</sup>All prompts are available in the code repository.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Engines and their Functionalities</title>
      <sec id="sec-3-1">
        <title>DA4DTE integrates four specialized engines:</title>
        <p>
          The first engine is the Knowledge Graph QA Engine TerraQ [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. TerraQ is a QA system
designed to process natural language requests that include spatiotemporal or metadata-related criteria
and satisfy the request by retrieving data from a Knowledge Graph (KG). User requests can include
references to image metadata (e.g., snow percentage in an image), geoentities (e.g., the country France),
administrative divisions (e.g., municipalities, regions), as well as spatiotemporal constraints.
        </p>
        <p>For example, users can make requests like “Find 10 images of Piedmont with cloud coverage under
20% and more than 50% vegetation, taken in August 2022” (outputs shown in Figure 2). The engine
then takes this request as input and translates it into a semantically equivalent SPARQL query. To do so,
it employs a pipeline of components for natural language understanding and KG grounding. First,
relevant entities and classes are extracted from the KG. Then, relations between the retrieved entities
and classes are identified, including spatial and temporal relations. At this stage, the core of the query
is complete, and the expected return values are identified by a finetuned Llama 2 model. The query
generator then produces the complete, executable SPARQL query. This query is subsequently enhanced
by a Mistral-7b-v2 model finetuned on SPARQL, and rewritten to optimize execution efficiency by
replacing GeoSPARQL functions with equivalent materialized topological predicates. In the end, the
query is executed over a GraphDB endpoint, and the QA process is complete.</p>
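        <p>The query-rewriting optimization can be pictured as a simple text transformation. The following Python sketch replaces a GeoSPARQL filter over geometries with a materialized topological predicate between the features themselves; the query fragment and the variable-naming convention are hypothetical, not TerraQ’s actual implementation.</p>

```python
import re

# Sketch of the rewriting step: a geof:sfIntersects filter over geometry
# variables (suffixed "Geom" by assumption) is replaced by a materialized
# geo:sfIntersects triple between the features, which the triple store can
# answer with an index lookup instead of computing geometry intersections.
FILTER_RE = re.compile(
    r"FILTER\s*\(\s*geof:sfIntersects\(\s*\?(\w+)Geom\s*,\s*\?(\w+)Geom\s*\)\s*\)"
)

def rewrite_geosparql(query: str) -> str:
    """Replace geof:sfIntersects filters with materialized predicates."""
    return FILTER_RE.sub(r"?\1 geo:sfIntersects ?\2 .", query)

query = """
SELECT ?image WHERE {
  ?image geo:hasGeometry ?imageGeom .
  ?region geo:hasGeometry ?regionGeom .
  FILTER(geof:sfIntersects(?imageGeom, ?regionGeom))
}
"""
rewritten = rewrite_geosparql(query)  # filter becomes: ?image geo:sfIntersects ?region .
```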
        <p>
          The second engine is the Search-by-Image Engine. This engine takes a query image and computes
a similarity score between the query image and all archive images to find the most similar images
to the query in a scalable way. This is achieved in two main steps: i) the image description
step, which characterizes the spatial and spectral information content of RS images; and ii) the image
retrieval step, which compares the resulting hash codes and retrieves the images most
similar to the query image, ranked by similarity. Our Search-by-Image Engine is defined based on
two self-supervised methods: 1) deep unsupervised cross-modal contrastive hashing (DUCH) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]; and
2) cross-modal masked autoencoder (CM-MAE) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. For both methods, the image description step is
composed of two modules: 1) a feature extraction module, which learns deep feature representations
of RS images by exploiting visual transformers (ViT); and 2) a deep hashing module, which learns
to map image representations into hash codes. The first module of the DUCH method is based on
contrastive self-supervised image representation learning, while that of the CM-MAE method is based
on unsupervised masked image modelling. The second module of each method employs a hashing
subnetwork with binarization loss functions. Our engine has both the single-modal (also known as
uni-modal) and cross-modal content-based image retrieval capability due to the consideration of the
modality-specific encoders. Example outputs are shown in Figure 3.
        </p>
        <p>
          A key feature of the Search-by-Image Engine is the integration of explainability tools that explain
the engine’s decision to retrieve a particular image given a query image. To this end,
we incorporate two explainability tools: Layer-wise Relevance Propagation (LRP) [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and BiLRP [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
LRP highlights areas in the input image that support a specific class decision by generating heatmaps.
Since CM-MAE is self-supervised and lacks class predictions, we train an auxiliary classification
head to estimate class probabilities for each image pair. These predictions enable the generation and
interpolation of class-specific LRP heatmaps, which emphasize semantically similar regions across
image pairs. BiLRP, while more computationally intensive, identifies shared regions in image pairs
without needing a classification head.
        </p>
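        <p>To give a flavor of how LRP redistributes relevance, the following toy Python sketch applies the epsilon rule to a single linear layer: relevance flows back onto inputs in proportion to their contribution to the output, so total relevance is conserved and inactive inputs receive none. The weights and inputs are made-up numbers; the engine applies LRP and BiLRP to deep ViT models, not to this toy layer.</p>

```python
# Toy sketch of Layer-wise Relevance Propagation (epsilon rule) for one
# linear layer, illustrating how an output score is redistributed onto
# input features as a "heatmap". All numbers are made up.

def lrp_linear(x, w, relevance_out, eps=1e-6):
    """Redistribute output relevance onto inputs proportionally to x_i * w_ij."""
    n_in, n_out = len(x), len(w[0])
    z = [sum(x[i] * w[i][j] for i in range(n_in)) for j in range(n_out)]
    relevance_in = [0.0] * n_in
    for j in range(n_out):
        # epsilon stabilizes the division when the activation z_j is small
        denom = z[j] + (eps if z[j] >= 0 else -eps)
        for i in range(n_in):
            relevance_in[i] += x[i] * w[i][j] / denom * relevance_out[j]
    return relevance_in

x = [1.0, 2.0, 0.0]
w = [[0.5], [0.25], [1.0]]           # 3 inputs, 1 output
r = lrp_linear(x, w, relevance_out=[1.0])
# relevance is conserved (sum(r) is approximately 1.0) and the inactive
# third input receives zero relevance
```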
        <p>
          The third engine is the Search-by-Text Engine. This engine takes a text sentence as a query and
efficiently retrieves the most similar images to the query text, achieving scalable cross-modal text-image
retrieval. The Search-by-Text Engine is developed by adapting the above-mentioned self-supervised
DUCH [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] to be operational on text-based queries. To this end, the feature extraction module is
adapted to extract feature representations of image-text pairs by exploiting bidirectional transformers
(e.g., BERT [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]) as text-specific encoders together with ResNet-152 [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] as image-specific encoders.
The second module of each method is adapted to learn cross-modal binary hash codes for image and
text modalities by simultaneously preserving semantic discrimination and modality-invariance in an
end-to-end manner. Example outputs are shown in Figure 4.
        </p>
        <p>To evaluate DUCH, we constructed a vessel captioning dataset, consisting of vessel text-image
pairs generated via a template-based image captioning approach. This approach consists of creating
predefined sentence templates with empty slots. The slots are then filled using semantic cues from
vessel bounding boxes (e.g., count, size) and contextual data from OpenStreetMap, particularly coastline
proximity (i.e., vessel locations relative to harbors or coastlines). Vessel sizes, derived from bounding box
dimensions, were categorized into five classes (very small to very big) and mapped to two vessel types:
boats (very small to medium) and ships (big and very big), reflecting typical usage and navigational
context.</p>
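        <p>The template-based captioning can be sketched as follows; the pixel-area thresholds and template wording are hypothetical stand-ins for those used in the actual dataset construction.</p>

```python
# Sketch of template-based caption generation for the vessel dataset:
# sentence templates with slots filled from bounding-box cues and
# coastline proximity. Thresholds and wording are assumed, not the
# actual dataset-construction values.

SIZE_CLASSES = ["very small", "small", "medium", "big", "very big"]

def size_class(box_area: float) -> str:
    """Map a bounding-box area (pixels) to one of five size classes."""
    thresholds = [100, 400, 1600, 6400]  # assumed pixel-area cut-offs
    for cls, t in zip(SIZE_CLASSES, thresholds):
        if t > box_area:
            return cls
    return SIZE_CLASSES[-1]

def vessel_type(cls: str) -> str:
    """Very small to medium vessels are boats; big and very big are ships."""
    return "ship" if SIZE_CLASSES.index(cls) >= 3 else "boat"

def caption(box_areas, near_coast: bool) -> str:
    """Fill a predefined sentence template from bounding-box cues."""
    types = [vessel_type(size_class(a)) for a in box_areas]
    counts = {t: types.count(t) for t in set(types)}
    parts = [f"{n} {t}{'s' if n > 1 else ''}" for t, n in sorted(counts.items())]
    location = "near the coastline" if near_coast else "in open water"
    return f"There are {' and '.join(parts)} {location}."

caption([50.0, 2000.0, 9000.0], near_coast=True)
```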
        <p>
          Finally, the fourth engine is the Visual QA Engine. This engine enables users to ask questions
about the content of RS images in a free-form manner, extracting valuable information. It employs the
LiT-4-RSVQA [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] model, which has been trained and evaluated on RSVQAxBEN. The LiT-4-RSVQA
architecture focuses on achieving state-of-the-art performance, while also providing rapid response
times. To do so, it employs the following modules: i) a lightweight text encoder module; ii) a lightweight
image encoder module; iii) a fusion module; and iv) a classification module. An RS image I and a question
Q about this image are considered as input. The encoder modules produce vector representations
which are subsequently passed to the fusion module. The feature fusion module consists of two linear
projections and a modality combination. The projections map the two modalities with dimensions dt
and dv into a common dimension df, where dt and dv denote the dimensions of the flattened output
of the text and image encoder modules, respectively. The projected features are then elementwise
multiplied. The classification module is defined as an MLP projection head.
        </p>
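        <p>The fusion step described above amounts to two linear projections followed by an elementwise product. A minimal Python sketch with toy dimensions (dt = 3, dv = 2, df = 2) and made-up weights:</p>

```python
# Sketch of the LiT-4-RSVQA feature-fusion step: text and image features
# of dimensions dt and dv are linearly projected into a common dimension
# df and combined by elementwise multiplication. Dimensions and weights
# below are toy values for illustration.

def linear(x, w):
    """Project vector x (length n) with weight matrix w (n rows, m cols)."""
    n, m = len(w), len(w[0])
    return [sum(x[i] * w[i][j] for i in range(n)) for j in range(m)]

def fuse(text_feat, image_feat, w_text, w_image):
    """Project both modalities to the common dimension df, then multiply."""
    t = linear(text_feat, w_text)    # dt to df
    v = linear(image_feat, w_image)  # dv to df
    return [a * b for a, b in zip(t, v)]

# dt = 3, dv = 2, df = 2
w_text = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
w_image = [[1.0, 0.0], [0.0, 1.0]]
fused = fuse([2.0, 3.0, 5.0], [0.5, 4.0], w_text, w_image)  # [1.0, 12.0]
```

        <p>In the actual model, the fused vector is passed to the MLP projection head of the classification module to select an answer.</p>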
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. DA4DTE in action</title>
      <p>We now consider a use case scenario for the digital assistant. The assistant welcomes the user and
asks them to pose a request. The user asks for a Sentinel-1 image from France during 2020, with
snow coverage of more than 50%. Then, the Engine Routing Agent of the Task Interpreter decides
that this is a request that should be fulfilled by the Knowledge Graph QA Engine which returns the
appropriate image. The interaction goes on with the user asking for a similar Sentinel-2 image and
then the Search-by-Image Engine is selected by the Engine Routing Agent. The term “Sentinel-2” is
extracted by the Argument Extraction Agent as the modality argument, so the engine is activated and
returns the appropriate image. Having selected that Sentinel-2 image, the user asks whether it presents
a rural area and the answer by the Visual QA Engine is presented. Finally, the user closes the interaction
with the assistant and the Engine Routing Agent of the Task Interpreter calls the Conversational Agent
to answer appropriately. In this example, the agents successfully coordinated to address the clearly
stated requests of the user. In cases where the user intent is not clear, the
assistant asks the user to elaborate on their request.</p>
      <p>As far as routing goes, in most cases the system does not invoke an incorrect engine, both because
the capabilities and responsibilities of each engine are clear and because the expected input differs
between engines. The only two engines that expect the same input are the Knowledge Graph QA engine
and the Search-by-Text engine; if the wrong one of these is selected, no outputs are returned and the
assistant acts as if no valid result could be found.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Future Work</title>
      <p>We plan to explore several research directions to further improve the capabilities of the system. First of
all, we aim to implement an alternative Engine Routing Agent using the Function Calling paradigm in
LLMs, to improve control over engine invocation compared to the current zero-shot prompting setup.
We also plan to extend the assistant’s capabilities to multi-step requests where multiple engines can be
activated in a sequence. As the complexity of the system increases, we intend to integrate a Manager
Agent to oversee and coordinate the behavior of all other agents within the Task Interpreter.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used the GPT family of models as well as Grammarly
for grammar and spelling check. After using these tools/services, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          , et al.,
          <article-title>AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework (</article-title>
          <year>2023</year>
          ). https://arxiv.org/abs/2308.08155.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Guo</surname>
          </string-name>
          , et al.,
          <article-title>Remote Sensing ChatGPT: Solving remote sensing tasks with ChatGPT and visual models</article-title>
          ,
          <source>in: IGARSS</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kononykhin</surname>
          </string-name>
          , et al.,
          <article-title>From data to decisions: Streamlining geospatial operations with multimodal GlobeFlowGPT</article-title>
          , in: ACM SIGSPATIAL,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Lee</surname>
          </string-name>
          , et al.,
          <article-title>Multi-agent geospatial copilots for remote sensing workflows (</article-title>
          <year>2025</year>
          ). https://arxiv.org/abs/2501.16254.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          , et al.,
          <article-title>RescueADI: Adaptive disaster interpretation in remote sensing images with autonomous agents</article-title>
          ,
          <source>TGRS</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          , et al.,
          <article-title>RS-Agent: Automating remote sensing tasks through intelligent agents (</article-title>
          <year>2024</year>
          ). https://arxiv.org/abs/2406.07089.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kefalidis</surname>
          </string-name>
          , et al.,
          <article-title>TerraQ: Spatiotemporal question-answering on satellite image archives</article-title>
          ,
          <source>in: IGARSS</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Mikriukov</surname>
          </string-name>
          , et al.,
          <article-title>Unsupervised contrastive hashing for cross-modal retrieval in remote sensing</article-title>
          ,
          <source>in: ICASSP</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hackstein</surname>
          </string-name>
          , et al.,
          <article-title>Exploring masked autoencoders for sensor-agnostic image retrieval in remote sensing</article-title>
          ,
          <source>TGRS</source>
          <volume>63</volume>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bach</surname>
          </string-name>
          , et al.,
          <article-title>On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</article-title>
          ,
          <source>PloS one 10</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O.</given-names>
            <surname>Eberle</surname>
          </string-name>
          , et al.,
          <article-title>Building and interpreting deep similarity models</article-title>
          ,
          <source>PAMI</source>
          <volume>44</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , et al.,
          <article-title>BERT: pre-training of deep bidirectional transformers for language understanding</article-title>
          ,
          <source>in: NAACL</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          , et al.,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: CVPR</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Hackel</surname>
          </string-name>
          , et al.,
          <article-title>LiT-4-RSVQA: Lightweight transformer-based visual question answering in remote sensing</article-title>
          ,
          <source>in: IGARSS</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>