<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Hazard Mapping and Vulnerability Monitoring (HaMMon) Post-Event Analysis Pipeline</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Leonardo Pelonero</string-name>
          <email>leonardo.pelonero@inaf.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mauro Imbrosciano</string-name>
          <email>mauro.imbrosciano@inaf.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eva Sciacca</string-name>
          <email>eva.sciacca@inaf.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabio Vitello</string-name>
          <email>fabio.vitello@inaf.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandra Casale</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Stalio</string-name>
          <email>stefano.stalio@lngs.infn.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sandra Parlati</string-name>
          <email>sandra.parlati@lngs.infn.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INAF Astrophysical Observatory of Catania</institution>
          ,
          <addr-line>Catania</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>INFN Laboratori Nazionali del Gran Sasso</institution>
          ,
          <addr-line>Assergi L'Aquila</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>In recent years, the monitoring and study of natural hazards have gained significant attention, particularly due to climate change, which exacerbates incidents like floods, droughts, storm surges, and landslides. Together with the constant risk of earthquakes, these climate-induced events highlight the critical necessity for enhanced risk assessment and mitigation strategies in susceptible areas such as Italy. In this work, we present a portable and fully automated pipeline for post-event analysis on High-Performance Computing (HPC) infrastructures. Our methodology integrates Photogrammetry techniques, Data Visualization and Artificial Intelligence technologies to analyze high-resolution 3D models of areas affected by natural disasters. This process enables the fusion and association of heterogeneous information directly onto the geometric data, creating a semantically enriched model useful to assess extreme natural events and evaluate their impact on risk-exposed assets.</p>
      </abstract>
      <kwd-group>
        <kwd>Post-Event Analysis</kwd>
        <kwd>Hazard Mapping</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Photogrammetry</kwd>
        <kwd>Mesh segmentation</kwd>
        <kwd>Textured 3D models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The monitoring and analysis of natural hazards have gained increasing relevance in recent years as their frequency and intensity continue to rise, particularly under the influence of climate change, which exacerbates events such as floods, droughts, storm surges, and landslides. These climate-driven phenomena highlight the urgent need for more effective risk assessment and mitigation strategies, especially in highly vulnerable areas like Italy.</p>
      <p>Hazard mapping plays a central role in this context, as it enables the identification and spatial
representation of risk-prone areas. Having a ready-to-use and easily deployable analysis pipeline is
of paramount importance when responding to disastrous events. To address this need, automated
pipelines that orchestrate photogrammetric workflows and AI-driven image preprocessing are essential
to support timely decision-making.</p>
      <p>The National Institute for Astrophysics (INAF), together with other national institutions, is a partner in the HaMMon (Hazard Mapping and vulnerability Monitoring) project, an initiative launched within the Italian National Research Centre for High Performance Computing, Big Data and Quantum Computing (ICSC) and coordinated by UnipolSai. The project aims to develop advanced tools and methodologies for the management of natural hazards, addressing all aspects from risk assessment to post-event analysis, including intervention planning and damage estimation.</p>
      <p>In this paper, we present a portable and reusable post-event analysis framework that combines Photogrammetry, AI and Data Visualization into a single orchestrated pipeline on an HPC platform, to assess extreme natural events and analyze their effects on assets at risk.</p>
      <p>Section 2 details the HaMMon computing platform, outlining its dual architecture composed of a Kubernetes cluster for scalable, containerized workloads and a Slurm-based HPC cluster for batch processing. Section 3 outlines the overall post-event analysis pipeline of our proposed approach, from UAV image acquisition to the web application. Section 4 shows our preliminary results. Section 5 draws our conclusions and outlines future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. HaMMon HPC Infrastructure</title>
      <p>The HaMMon computing platform is hosted on INFN resources at the Laboratori Nazionali del Gran Sasso (LNGS) and managed by the LNGS Computing and Network Service. The LNGS HPC infrastructure comprises two primary clusters: a Kubernetes (https://kubernetes.io/) cluster dedicated to the HaMMon project, augmented by central services developed by the INFN Data Cloud working group, and a general-purpose SLURM-based (https://slurm.schedmd.com/) HPC cluster available to HaMMon research groups. The LNGS HPC infrastructure is connected at 10 Gb/s to the national research network GARR (https://www.garr.it/it/); this connection will be upgraded to 100 Gb/s in the next few months.</p>
      <p>The following sections provide a technical description of the two clusters depicted in Figure 1.</p>
      <sec id="sec-2-1">
        <title>2.1. Kubernetes Cluster on OpenStack</title>
        <p>The cluster is built on OpenStack (https://www.openstack.org/) virtual machines and orchestrated with Rancher’s RKE2 (https://docs.rke2.io/). High availability is ensured by three master nodes, while worker nodes can be scaled dynamically to match workload demands. Deployment and lifecycle management are fully automated via a Puppet module developed by INFN. One hypervisor equipped with four NVIDIA H100 GPUs is reserved for the HaMMon workloads: each NVIDIA H100 GPU is accessible through a Kubernetes worker node.</p>
        <p>User authentication is handled by a custom INFN webhook built upon the official Kubernetes OIDC plugin kubelogin (https://github.com/int128/kubelogin) and backed by the INFN Identity and Access Management system (INFN-IAM). Fine-grained, multi-tenant access control is enforced through Capsule (https://projectcapsule.dev/), so that each institute operates in an isolated workspace (called a capsule tenant) with its own namespaces, Network File System (NFS) storage, and applications.</p>
        <p>The storage services comprise two tiers. For short-term needs, an NFS CSI provisioner delivers more than 20 TB of storage; a migration from NFS to a CephFS (https://docs.ceph.com/) backend is planned in the coming months. For long-term retention, an S3-compatible object store on INFN-LNGS resources supports large-scale data retention across all HaMMon partners.</p>
        <p>A Harbor (https://goharbor.io/) registry, deployed and managed by INFN Data Cloud and secured by INFN credentials, hosts Docker (https://www.docker.com/) images for all users.</p>
        <p>To safeguard user-deployed services, a VPN gateway (integrated with INFN-IAM) currently restricts access so that no user service is directly exposed to the public Internet.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. SLURM-Based HPC Cluster</title>
        <p>The SLURM batch-mode HPC cluster consists of 72 CPU nodes (Lenovo NeXtScale nx360 M5) plus one node hosting four NVIDIA A100 GPUs. A significant increase in computing power is planned within a few months, with the addition of 300 CPU nodes and five nodes hosting four NVIDIA H100 GPUs each. Interconnects include a 100 Gb/s Intel Omni-Path network, a 100 Gb/s Ethernet fast interconnection network, and, in the next few months, a 400 Gb/s InfiniBand network.</p>
        <p>The storage architecture features a high-performance shared filesystem of about 350 TB accessible by all nodes. An expansion of about 5 PB in available storage, together with performance improvements, is planned in the coming months. Long-term data retention is ensured by an archival backup system providing 5 PB of LTO9 tape.</p>
        <p>The software stack is based on Rocky Linux (https://rockylinux.org/) 8.6 and includes compiler suites and MPI implementations such as OpenMPI (https://www.openmp.org/) and Intel MPI (https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html), together with core libraries like CUDA and a broad set of scientific packages managed via Spack (https://spack.io/). User environments are configured through environment modules, while SLURM handles job scheduling, priority management, and accounting.</p>
        <p>Two public SSH login nodes and two Jupyter Notebook (https://jupyter.org/) gateway nodes provide command-line and web-based access, respectively. Authentication uses INFN-LNGS credentials; authorization leverages UNIX group membership with a shared project filesystem for group members.</p>
        <p>The health and performance of the LNGS HPC infrastructure are monitored via Checkmk (https://checkmk.com/it) at the cluster level; Prometheus (https://prometheus.io/) and Grafana (https://grafana.com/) collect and visualize in-cluster metrics, while accounting tools track resource usage. Together, these two complementary clusters deliver to the HaMMon project a flexible, scalable, and secure HPC environment.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Post-event analysis pipeline</title>
      <p>The unpredictable nature of damage from natural disasters to assets and infrastructure makes post-event assessment a critical yet complex task for stakeholders like insurance companies and public administrations.</p>
      <p>A significant obstacle in this process is the lack of standardized methodologies capable of addressing the diverse impacts of different catastrophic events. To overcome this limitation, this work introduces a versatile pipeline whose detection and evaluation results are applicable across various types of natural hazards. The primary objective is to significantly assist damage assessment, reducing the need for time-consuming and costly on-site inspections.</p>
      <p>
        The pipeline involves the following steps depicted in Figure 2 (see also [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for more details):
1. UAV Images Acquisition: To ensure both effectiveness and precision in the final results, the initial phase involves planning and execution of drone surveys over the affected area. This process includes delineating the area of interest and setting drone flight specifics, such as altitude, flight path, and coverage area.
2. Photogrammetry: The collected imagery is used to generate high-resolution (centimeter-scale) 3D models using “Aerial Structure-from-Motion” (ASfM) techniques [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. This photogrammetric approach autonomously determines the geometry of the area, as well as the position and orientation of the cameras. By leveraging overlapping images captured from multiple viewpoints, it enables the reconstruction of detailed 3D models and produces georeferenced maps of the examined area (see [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for more details).
3. Machine Learning: The process is complemented by Artificial Intelligence (AI) algorithms that extract features from aerial images and integrate them into the model, enriching it with semantic information.
4. Web App: These steps aim to detect and classify damage in 3D models, offering a scalable framework for post-disaster analysis. The dataset of augmented digital-twin models provides stakeholders and claims adjusters with detailed visual references for remote damage assessment and is accessible through a user-friendly web application. The platform allows for the immediate 3D visualization of the analysis results, powered by Cesium (CesiumJS: https://cesium.com/platform/cesiumjs/), facilitating a prompt understanding of the damage location and extent. Furthermore, the resulting data are downloadable, allowing further offline analysis and integration into other systems.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Machine Learning: implementation</title>
        <p>
          The HaMMon project employs machine learning algorithms to enrich 3D photogrammetric models with
semantic information extracted from aerial imagery (see [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] for more details). Deep learning models
generate black and white segmentation masks that highlight features such as buildings, roads, and
damaged or flooded areas.
        </p>
        <p>
          At this stage, we rely on publicly available datasets for semantic segmentation of post-disaster scenarios, such as RescueNet [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and FloodNet [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], to train our models. We selected the Tiramisu [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and Attention-UNet [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] Convolutional Neural Networks (CNNs) for their good balance between performance and computational requirements, with the latter showing superior results.
        </p>
        <p>Neural network training is performed on the SLURM cluster, leveraging high-capacity GPUs such as the NVIDIA A100, which provides 80 GB of memory each. The ability to use high-capacity GPUs is critical for enabling effective batch normalization, a crucial technique to stabilize training and mitigate overfitting. This is particularly important in the context of high-resolution UAV imagery, where images cannot be significantly downscaled or cropped without losing critical details, and of CNNs, which are inherently memory-intensive. The use of libraries like PyTorch (https://pytorch.org/) Distributed Data Parallel (DDP) further enhances efficiency by enabling training to be distributed across multiple processes, effectively combining the memory of different GPUs.</p>
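        <p>As an illustration of this setup, the minimal PyTorch DDP sketch below launches one training process per allocated GPU; the single-convolution model and random tensors are self-contained stand-ins for the actual networks and UAV data, which are not shown here.</p>
        <preformat>
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun (or the SLURM launcher) sets RANK, LOCAL_RANK and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-ins for the real CNN and the UAV dataset: a single conv layer
    # and random tensors keep the sketch self-contained and runnable.
    model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(64, 3, 256, 256),
                            torch.randint(0, 2, (64, 256, 256)))
    sampler = DistributedSampler(dataset)   # one data shard per process
    loader = DataLoader(dataset, batch_size=4, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for images, masks in loader:
            images = images.cuda(local_rank, non_blocking=True)
            masks = masks.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()   # DDP all-reduces gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
</preformat>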
        <p>Model predictions are performed on the Kubernetes cluster so that they can be integrated into the photogrammetry pipeline. Since inference is executed each time on the entire dataset, it is crucial to run it in parallel across multiple images, leveraging vectorization and GPU acceleration. Different steps, such as image downscaling, patch division, and mask upscaling, are carried out on separate pods, thereby freeing GPU resources for photogrammetry tasks.</p>
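        <p>A minimal sketch of the patch-based inference step is given below; the patch size, batch size, and stitching logic are illustrative assumptions rather than the HaMMon implementation, and a trained segmentation model returning per-class logits is assumed.</p>
        <preformat>
import torch

PATCH = 512  # illustrative patch size in pixels

def predict_mask(model, image, batch_size=16, device="cuda"):
    """Split one UAV image tensor (C, H, W) into patches, run batched
    GPU inference, and stitch the predicted labels into a full mask."""
    model.eval().to(device)
    c, h, w = image.shape
    patches, coords = [], []
    for y in range(0, h, PATCH):
        for x in range(0, w, PATCH):
            patch = image[:, y:y + PATCH, x:x + PATCH]
            # Zero-pad border patches to a uniform size.
            pad_h, pad_w = PATCH - patch.shape[1], PATCH - patch.shape[2]
            patch = torch.nn.functional.pad(patch, (0, pad_w, 0, pad_h))
            patches.append(patch)
            coords.append((y, x))

    mask = torch.zeros(h, w, dtype=torch.long)
    with torch.no_grad():
        for i in range(0, len(patches), batch_size):
            batch = torch.stack(patches[i:i + batch_size]).to(device)
            pred = batch_labels = model(batch).argmax(dim=1).cpu()
            for p, (y, x) in zip(pred, coords[i:i + batch_size]):
                ph = min(PATCH, h - y)   # crop the padding away
                pw = min(PATCH, w - x)
                mask[y:y + ph, x:x + pw] = p[:ph, :pw]
    return mask
</preformat>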
      </sec>
      <sec id="sec-3-2">
        <title>3.2. SfM: parallel implementation</title>
        <p>This section outlines the architectural design and implementation for deploying the photogrammetry workflow based on Agisoft Metashape within a Kubernetes framework. The work focused on three key engineering objectives for workflow orchestration:
• Creating a compatible and portable execution environment for Agisoft Metashape. This involves packaging Metashape and all its dependencies into Docker images. This approach guarantees that the workflow executes reliably and predictably, regardless of the specific hardware configuration of a node, while also simplifying deployment.
• Developing a Python-based workflow to efficiently manage the parallel execution of Metashape. This involves designing and implementing a master-worker architecture in Python, following Agisoft’s official documentation for distributed processing (https://agisoft.freshdesk.com/support/solutions/articles/31000145918-how-to-configure-the-network-processing), that adapts to any number of available worker nodes (a schematic sketch follows this list).
• Prioritizing resource management and performance optimization across the diverse Kubernetes cluster. This includes analyzing the resource consumption (CPU, RAM, GPU) of the Metashape workflow on the Kubernetes infrastructure. The goal is to devise strategies that maximize throughput while navigating the cluster’s heterogeneous nature and the Agisoft Metashape floating-license limitation.</p>
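        <p>The sketch below illustrates the master-worker pattern in plain Python with a multiprocessing task queue; it is only a schematic of the coordination idea, since the actual implementation drives Metashape’s network-processing mode across Kubernetes pods, as detailed in [10].</p>
        <preformat>
from multiprocessing import Process, Queue

def worker(task_queue, result_queue, worker_id):
    """Consume photogrammetry tasks until a None sentinel arrives."""
    while True:
        task = task_queue.get()
        if task is None:
            break
        # Here the real worker would invoke a Metashape processing step
        # (alignment, depth maps, meshing, ...) on its share of the data.
        result_queue.put((worker_id, task, "done"))

def master(tasks, n_workers=3):
    """Distribute tasks to any number of workers and collect results."""
    task_queue, result_queue = Queue(), Queue()
    procs = [Process(target=worker, args=(task_queue, result_queue, i))
             for i in range(n_workers)]
    for p in procs:
        p.start()
    for t in tasks:
        task_queue.put(t)
    for _ in procs:          # one sentinel per worker
        task_queue.put(None)
    results = [result_queue.get() for _ in tasks]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    # On Kubernetes each worker runs in its own pod rather than a local
    # process, but the coordination logic is the same.
    print(master([f"chunk-{i}" for i in range(8)], n_workers=3))
</preformat>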
        <p>
          The application software is packaged through a modular Docker containerization strategy, ensuring flexibility and maintainability. License management is addressed with a pragmatic and transparent approach based on manual scaling of worker deployments, giving the operator full control over the active computational resources. The complete details of the implementation can be found in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          To validate the efficiency and scalability of the parallel and distributed task-management implementation, an experimental validation was conducted on the cloud-HPC infrastructure based on OpenStack and Kubernetes, implemented in the green HPC4AI@UNITO data center at the University of Turin [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The entire photogrammetric workflow, from image import to final model export, was executed on a cluster comprising three worker nodes equipped with Tesla T4 GPUs with 16 GB of memory. This evaluation utilized a dataset of 2,099 UAV images covering Tredozio’s square, obtained from a survey specifically conducted to document and analyze the impacts of landslides, seismic activity, and flooding caused by the Tramazzo river.
        </p>
        <p>The results demonstrated a significant acceleration in processing times compared to single-node executions. Figure 3 presents a bar chart that directly compares the total processing times of the Structure-from-Motion (SfM) workflow on the same dataset. The chart compares the total processing time of the entire workflow executed with two different methods: a standard standalone execution on a single node (baseline, shown in light blue) and our distributed network process using a master-worker architecture (shown in dark blue). The standalone execution completed the task in 21 hours and 40 minutes, a result comparable to the 20 hours taken by our distributed architecture using only a single worker node. The key benefits and strong scalability of our implementation become evident when multiple workers participate in the process. As shown in the Figure, introducing a second worker nearly halves the execution time to approximately 12 hours, and a third worker further reduces it to under 10 hours, confirming the efficiency of our parallelization strategy.</p>
        <p>This visual comparison clearly illustrates how the processing times are effectively halved as additional computational nodes are introduced, highlighting the direct benefits of parallelization.</p>
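        <p>Expressed as speedup over the single-worker distributed run (20 hours), these timings correspond to roughly 20/12 ≈ 1.7 with two workers and 20/10 ≈ 2.0 with three, i.e. parallel efficiencies of about 0.83 and 0.67 respectively, a gradual decline consistent with the coordination overhead and serial portions inherent to a master-worker scheme.</p>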
        <p>It should be noted that the experimental validation on this initial infrastructure was limited to three
workers due to the practical constraints imposed by the commercial Agisoft Metashape floating license.</p>
        <p>The application has since been migrated to the HaMMon HPC Infrastructure reported in Section 2.</p>
        <p>In the next Section 4, we present the quantitative results of the training process for the ML module and a comparative analysis of two alternative approaches for 3D semantic classification. The two methods were evaluated on the same use case: the first performs segmentation directly on the 3D model, while the second uses classification primitives on tiled models for the CesiumJS platform.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Digital Twin products</title>
        <p>The dataset for this study was acquired during a survey conducted in the municipality of Tredozio (province of Forlì-Cesena, Italy) in October 2024. This survey was carried out as part of the HaMMon project, coordinated by INAF-OACT, UniMiB (Università degli Studi di Milano-Bicocca) and ENEA (Agenzia nazionale per le Nuove Tecnologie, l’Energia e lo Sviluppo Economico Sostenibile), both project partners.</p>
        <p>
          Data acquisition was performed using Unmanned Aerial Vehicles (UAVs) equipped with a camera and Real-Time Kinematic (RTK) modules to produce directly georeferenced imagery. The overlapping images obtained are fundamental for UAV-based photogrammetry, which is emerging as a particularly effective solution for disaster management in remote sensing [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ]. This capability is fundamental for post-event analysis, timely risk assessment, and effective planning of mitigation measures.
        </p>
        <p>Figure 4: (a) tiled model; (b) colored point cloud; (c) ML-generated segmentation mask for building and road detection; (d) point cloud classification of buildings (red) and road (grey).</p>
        <p>Data acquired on-site in the Monte Busca area were processed using the HaMMon platform. Figure 4 summarizes the primary outputs generated through our workflow. The top-left quadrant shows the high-detail tiled 3D model. The model clearly reveals the impact of the landslide: the affected area was estimated to be approximately 30 meters in width and 110 meters in depth. The top-right quadrant displays the corresponding colorized dense point cloud. The result lacks fine detail due to the presence of dense vegetation, which introduces irregularities and noise in the photogrammetric reconstruction process.</p>
        <p>Figure 4 also presents the preliminary outputs from the integrated Artificial Intelligence classification module. The bottom-left quadrant shows how the binary segmentation masks generated by the module fit precisely onto the original UAV images. The bottom-right quadrant shows the classified point cloud, where features such as buildings and roads are identified in red and grey, obtained by projecting all computed masks from the input aligned cameras onto the 3D model. The results are noticeably affected by the fact that the input images differ significantly from the training domain, both in terms of represented objects and image characteristics such as angle and distance. The development is still at an early stage, and more robust models with improved generalization capabilities will be employed in future iterations. Nevertheless, these outcomes offer a solid starting point for future development and serve as a valuable test for assessing the integration and coherence of the automated workflow.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Machine Learning Outcomes</title>
        <p>
          The main challenges encountered in training deep learning models for UAV-based post-disaster analysis were the limited number of available images and the strong imbalance among semantic classes. To mitigate the scarcity of training examples, we explored transfer learning [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] by pre-training networks on the larger RescueNet dataset and then fine-tuning them on FloodNet. We tested three fine-tuning strategies, TL0, TL1, and TL2, differing in the number of frozen layers: TL0 retrains the entire network, TL2 updates only the final classification layer, while TL1 represents an intermediate strategy, defined specifically for each architecture (a minimal sketch of this layer-freezing scheme follows).
        </p>
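        <p>As an illustration, the sketch below expresses such strategies in PyTorch by toggling requires_grad on parameter groups; the module names ("decoder", "classifier") are placeholders and do not reflect the exact Tiramisù or Attention U-Net structure.</p>
        <preformat>
import torch.nn as nn

def apply_finetuning_strategy(model: nn.Module, strategy: str):
    """Freeze parameters according to a TL0/TL1/TL2-style scheme.

    TL0: retrain everything; TL2: update only the final classifier;
    TL1: an intermediate split, defined per architecture (here we
    illustratively unfreeze the decoder half of the network).
    """
    if strategy == "TL0":
        for p in model.parameters():
            p.requires_grad = True
    elif strategy == "TL2":
        for p in model.parameters():
            p.requires_grad = False
        # 'classifier' is a placeholder name for the final layer.
        for p in model.classifier.parameters():
            p.requires_grad = True
    elif strategy == "TL1":
        for p in model.parameters():
            p.requires_grad = False
        for name, p in model.named_parameters():
            if "decoder" in name or "classifier" in name:
                p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# The returned list is what the optimizer is built on, e.g.
# torch.optim.Adam(apply_finetuning_strategy(model, "TL1"), lr=1e-5).
</preformat>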
        <p>Quantitative results are reported (see Tab. 1) in terms of accuracy and mean Intersection over Union (mIoU); accuracy measures the fraction of correctly classified pixels, while mIoU is the mean ratio of overlap between predicted and ground-truth regions across classes. The best performances were consistently achieved with TL0, where the entire network was fine-tuned on FloodNet, both for Tiramisù and Attention U-Net. This confirms that re-adapting all convolutional filters is beneficial, as the two datasets, though similar, have noticeable domain differences. Overall, the transfer learning experiments outperformed the baselines, underlining the impact of limited training data.</p>
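        <p>For reference, the following self-contained sketch shows one way to compute these two metrics from a confusion matrix; it assumes integer-labeled prediction and ground-truth masks and is not the project’s evaluation code.</p>
        <preformat>
import numpy as np

def confusion_matrix(pred, target, n_classes):
    """Accumulate an n_classes x n_classes confusion matrix from
    integer-labeled prediction and ground-truth masks."""
    idx = n_classes * target.reshape(-1) + pred.reshape(-1)
    return np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)

def accuracy_and_miou(cm):
    """Pixel accuracy and mean Intersection over Union from a confusion matrix."""
    accuracy = np.diag(cm).sum() / cm.sum()
    # Per-class IoU: intersection / (prediction + ground truth - intersection).
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against empty classes
    return accuracy, iou.mean()

# Example with two 4x4 masks and 3 classes:
pred = np.array([[0, 0, 1, 1], [0, 1, 1, 2], [2, 2, 1, 1], [2, 2, 0, 0]])
target = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 1, 1], [2, 2, 0, 1]])
print(accuracy_and_miou(confusion_matrix(pred, target, 3)))
</preformat>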
        <p>As shown in the confusion matrices (see, as an example, Fig. 5) the networks generally perform
well, with the most significant misclassifications occurring in categories that involve a degree of
subjectivity—such as the damage scale of buildings—or that depend on the observer’s viewpoint, as in
the case of roads that may appear either “blocked” or “clear” depending on the angle of the image.</p>
        <p>Despite the differences in acquisition setups (RescueNet and FloodNet provide strictly nadiral UAV views, whereas HaMMon relies on multi-angle imagery for photogrammetry), the training phase yielded encouraging results, providing a solid baseline for validating the workflow and demonstrating the feasibility of integrating machine learning within the HaMMon pipeline.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Semantic Visualization of 3D Models</title>
        <p>
          The true potential of this workflow is realized by enriching the geometric models from Structure from Motion (SfM) with semantic context from Machine Learning (ML). This integration process transforms two-dimensional segmentation labels into an intuitive 3D visual representation, highlighting specific features identified during the analysis. The two distinct methodologies detailed in this section are extensions of our previously validated, Python-based photogrammetric workflow [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], designed to bridge the gap between semantic data and its final 3D representation. Both use the classified point cloud as input (the source code and implementation details for both methodologies are publicly available in our GitHub repository: https://github.com/Fliki1/Cesium-3D-Tile-ClassificationPrimitive): (1) direct mesh recolorization for static, high-fidelity models, and (2) dynamic styling of tiled 3D models for scalable web-based applications.
        </p>
        <sec id="sec-4-3-1">
          <title>4.3.1. 3D Wireframe from dense cloud classification</title>
          <p>This first approach embeds the segmentation data directly into the 3D model texture by altering the vertex colors of the source dense point cloud within the photogrammetric pipeline. This method is particularly suited for generating self-contained meshes in which classified features must be permanently and clearly delineated. The workflow was implemented using the Agisoft Metashape Python module and consists of the following steps.</p>
          <p>Initially, a subset of the dense point cloud, containing only the points belonging to a desired class
(e.g., “building”), is exported. The color attributes of these points are then automatically overwritten.
A Python script was developed to modify the RGB values of the points based on their classification
ID. The mapping between class IDs and their new colors is defined in an external user-editable JSON
configuration file, allowing for flexible color assignments.</p>
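          <p>A minimal sketch of this recoloring step is shown below, using the laspy library to read and write the point cloud; the file names, JSON schema, and class IDs are illustrative, and the actual HaMMon script may differ.</p>
          <preformat>
import json
import laspy
import numpy as np

# Illustrative JSON config mapping class IDs to 8-bit RGB colors, e.g.
# {"6": [255, 0, 0], "11": [80, 80, 80]}  (6 = building, 11 = road surface
# in the ASPRS LAS codes; the actual IDs depend on the trained model).
with open("class_colors.json") as f:
    class_colors = {int(k): tuple(v) for k, v in json.load(f).items()}

las = laspy.read("classified_subset.las")
classification = np.asarray(las.classification)
red = np.asarray(las.red).copy()
green = np.asarray(las.green).copy()
blue = np.asarray(las.blue).copy()

for class_id, (r, g, b) in class_colors.items():
    sel = classification == class_id
    # LAS stores colors as 16-bit values, so scale the 8-bit RGB by 256.
    red[sel], green[sel], blue[sel] = r * 256, g * 256, b * 256

las.red, las.green, las.blue = red, green, blue
las.write("recolored_subset.las")  # then re-imported into the Metashape project
</preformat>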
          <p>Once recolored, this dense point cloud subset is re-imported into the Metashape project. The 3D
model generation process is then executed, with the point cloud set as the data source for the mesh. The
algorithm thus generates the new mesh surface by interpolating the points from the modified cloud,
projecting the color attributes of the points onto the corresponding areas of the model’s texture.</p>
          <p>
            As shown in Figure 6, the final output is a static 3D model where segmented areas are immediately distinguishable (buildings colored red and the road surface dark gray), making it suitable for visual reports and detailed offline analysis. It should be noted that the resulting mesh is not perfectly optimal: it presents holes and extraneous artifacts, which are primarily attributable to the known challenges of photogrammetric reconstruction in areas with high vegetation density [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. Foliage creates occlusions and complex, non-static surfaces, leading to a point cloud with inherent gaps and inaccuracies that are inevitably inherited by the generated 3D mesh.
          </p>
        </sec>
        <sec id="sec-4-3-2">
          <title>4.3.2. Classification Primitives with CesiumJS</title>
          <p>This second methodology takes a different approach, shifting the challenge of data integration from a pre-processing step to the web environment at runtime. This addresses the need for scalable and interactive visualization, especially in web applications handling large-scale 3D datasets, such as urban areas or complex infrastructure. This approach leverages CesiumJS, a robust open-source JavaScript framework for 3D maps, to apply dynamic styling to tiled 3D models generated from the SfM pipeline. Tiled models are crucial for performance, as they employ a Level of Detail (LOD) mechanism that optimizes rendering by loading progressively detailed versions based on viewing distance. They enable efficient data streaming, as only the visible and relevant tiles are requested from the server, significantly reducing network bandwidth and client-side memory consumption.</p>
          <p>This method overlays styling information directly at runtime, without altering the original 3D tiled model. The process begins by exporting the coordinates of the classified points of interest (from a .las file) into the standard GeoJSON format; each coordinate from the GeoJSON file is then mapped to a geometric primitive supported by Cesium. A minimal sketch of the export step is shown below.</p>
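          <p>The following sketch converts classified points from a .las file into GeoJSON point features; file names and the class filter are illustrative, and the source coordinates are assumed to already be in WGS84 longitude/latitude as GeoJSON requires.</p>
          <preformat>
import json
import laspy
import numpy as np

TARGET_CLASS = 6  # illustrative class ID (e.g., "building")

las = laspy.read("classified_points.las")
sel = np.asarray(las.classification) == TARGET_CLASS

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [float(x), float(y), float(z)]},
        "properties": {"class": TARGET_CLASS},
    }
    for x, y, z in zip(las.x[sel], las.y[sel], las.z[sel])
]

# GeoJSON expects WGS84 longitude/latitude; we assume the .las file is
# already georeferenced accordingly (otherwise reproject, e.g. with pyproj).
with open("classified_points.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
</preformat>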
          <p>Within the web application, the base 3D tileset is loaded first. Subsequently, the GeoJSON file is fetched, and its coordinates are used to create visual markers. For each feature, the script instantiates a Cesium.GeometryInstance in the form of a small ellipsoid (1 cm radius). This small volume acts as a 3D spatial marker precisely located at the position of the classified point. Each of these instances is assigned specific visual attributes, such as a semi-transparent color, which can be customized to represent different classes or types of features.</p>
          <p>As shown in Figure 7, the final result is a smooth integration where classified areas are highlighted directly on the original 3D model. This approach is exceptionally powerful as it does not alter the source data, allows for toggling different classification layers on and off, and leverages the performance of the tiled model’s Level of Detail (LOD) system, ensuring a fluid and interactive user experience even with massive datasets.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future work</title>
      <p>In conclusion, this work presented a structured post-event analysis pipeline, integrating
photogrammetric processing and machine learning semantic segmentation.</p>
      <p>The proposed framework was validated on a real-world case study documenting landslide impacts in Tredozio, Italy, successfully generating classified 3D digital twins from UAV imagery.</p>
      <p>Building upon the initial proof-of-concept infrastructure at the University of Turin (HPC4AI@UNITO), the application has been transitioned to the HaMMon HPC Infrastructure, setting the stage for the next phase of our research. Future efforts will be dedicated to conducting a comprehensive performance analysis in this new environment, benchmarking the entire workflow, from data ingestion to model generation, to quantify the performance gains provided by the new hardware.</p>
      <p>A fundamental contribution of this work is the development of two distinct approaches to link 2D semantic labels and 3D visualization: direct mesh building from a dense point cloud and an interactive web-based application using CesiumJS classification primitives. We acknowledge that the current ML results are at an early stage. They are affected by input images that differ significantly from the training domain, both in terms of represented objects and image characteristics such as angle and distance. To address this, future efforts will be dedicated to improving the model’s robustness by expanding the training dataset to include a wider variety of scenes, objects, and imaging conditions.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work is supported by the Spoke 1 “FutureHPC &amp; BigData” and the Spoke 3 “Astrophysics and
Cosmos Observations” of the ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big
Data and Quantum Computing, funded by NextGenerationEU.</p>
      <p>The authors are grateful to the team of Computing and Networking System Service of the Laboratori
Nazionali del Gran Sasso (LNGS - National Institute of Nuclear Physics) for sharing the HPC cluster
resources within the project ICSC - Centro Nazionale di Ricerca in High-Performance Computing,
Big Data and Quantum Computing (Codice Progetto CN00000013 - PNRR Missione 4, Componente 2,
Investimento 1.4- ICSC - CUP I53C21000340006).
During the preparation of this work, the author(s) used GPT-4 for grammar and spelling checking. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Imbrosciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sciacca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vitello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pelonero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Franchina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Becciani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Colonnelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Medić</surname>
          </string-name>
          ,
          <article-title>The cloud-HPC infrastructure for hazard mapping and vulnerability monitoring (HaMMon)</article-title>
          ,
          <source>in: 2025 33rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP)</source>
          , IEEE,
          <year>2025</year>
          , pp.
          <fpage>309</fpage>
          -
          <lpage>316</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Agüera-Vega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Carvajal-Ramírez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Martínez-Carricondo</surname>
          </string-name>
          ,
          <article-title>Accuracy of digital surface models and orthophotos derived from unmanned aerial vehicle photogrammetry</article-title>
          ,
          <source>Journal of Surveying Engineering</source>
          <volume>143</volume>
          (
          <year>2017</year>
          )
          <fpage>04016025</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Knuth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhushan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Schwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Alexandrov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>McNeil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dehecq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Florentine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>O'Neel</surname>
          </string-name>
          ,
          <article-title>Historical structure from motion (HSfM): Automated processing of historical aerial photographs for long-term topographic change analysis</article-title>
          ,
          <source>Remote Sensing of Environment</source>
          <volume>285</volume>
          (
          <year>2023</year>
          )
          <fpage>113379</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Vitello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pelonero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Imbrosciano</surname>
          </string-name>
          ,
          <source>UAV-Based Digital Twin Creation: Algorithms for Advanced Processing, Classification, and 3D Model Enrichment</source>
          ,
          <year>2025</year>
          . URL: https://doi.org/10.15161/oar.it/eb6em-v6t60. doi:10.15161/oar.it/eb6em-v6t60.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Imbrosciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sciacca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pelonero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vitello</surname>
          </string-name>
          ,
          <source>Semantic Segmentation of UAV Imagery for Post- Disaster Damage Assessment</source>
          ,
          <year>2025</year>
          . URL: https://doi.org/10.15161/oar.it/de5rk-qp426. doi:10.15161/oar.it/de5rk-qp426.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahnemoonfar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <article-title>RescueNet: A high resolution UAV semantic segmentation dataset for natural disaster damage assessment</article-title>
          ,
          <source>Scientific Data</source>
          <volume>10</volume>
          (
          <year>2023</year>
          )
          <fpage>913</fpage>
          . URL: https://doi.org/10.1038/s41597-023-02799-4. doi:10.1038/s41597-023-02799-4.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahnemoonfar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Varshney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <article-title>FloodNet: A high resolution aerial imagery dataset for post flood scene understanding</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>89644</fpage>
          -
          <lpage>89652</lpage>
          . URL: https://doi.org/10.1109/ACCESS.2021.3090981. doi:10.1109/ACCESS.2021.3090981.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jégou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Drozdzal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vazquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation</article-title>
          ,
          <source>in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1175</fpage>
          -
          <lpage>1183</lpage>
          . URL: https://arxiv.org/abs/1611.09326. doi:10.1109/CVPRW.2017.156.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Oktay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schlemper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Le</given-names>
            <surname>Folgoc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Misawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>McDonagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. Y.</given-names>
            <surname>Hammerla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kainz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rueckert</surname>
          </string-name>
          ,
          <article-title>Attention u-net: Learning where to look for the pancreas</article-title>
          , arXiv preprint arXiv:1804.03999 (
          <year>2018</year>
          ). doi:10.48550/arXiv.1804.03999.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pelonero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Imbrosciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sciacca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Vitello</surname>
          </string-name>
          ,
          <source>Parallel Photogrammetry with Metashape on Kubernetes</source>
          ,
          <year>2025</year>
          . URL: https://doi.org/10.15161/oar.it/5prtw-4ca62. doi:10.15161/oar.it/5prtw-4ca62.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Casagli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Frodella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Morelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tofani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ciampalini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Intrieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Raspini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tanteri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>Spaceborne, UAV and ground-based remote sensing techniques for landslide mapping, monitoring and early warning</article-title>
          ,
          <source>Geoenvironmental Disasters</source>
          <volume>4</volume>
          (
          <year>2017</year>
          )
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kucharczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Hugenholtz</surname>
          </string-name>
          ,
          <article-title>Remote sensing of natural hazard-related disasters with small drones: Global trends, biases, and research opportunities</article-title>
          ,
          <source>Remote Sensing of Environment</source>
          <volume>264</volume>
          (
          <year>2021</year>
          )
          <fpage>112577</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>A comprehensive survey on transfer learning</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          <volume>109</volume>
          (
          <year>2020</year>
          )
          <fpage>43</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>McCullough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. B.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>McAlinden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Soibelman</surname>
          </string-name>
          ,
          <article-title>Semantic segmentation and data fusion of Microsoft Bing 3D cities and small UAV-based photogrammetric data</article-title>
          , arXiv preprint arXiv:2008.09648 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>