<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Application of Semantic Segmentation of Clouds of Points for Preservation of Cultural Heritage</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nataliya</forename><surname>Boyko</surname></persName>
							<email>nataliya.i.boyko@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Profesorska Street 1</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mariia</forename><surname>Rizhko</surname></persName>
							<email>mariia.rizhko.knm.2018@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Profesorska Street 1</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Application of Semantic Segmentation of Clouds of Points for Preservation of Cultural Heritage</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">821DDD869691E4F2E3E63CAF7F1973AA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T16:26+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence</term>
					<term>Point Cloud</term>
					<term>Semantic Segmentation</term>
					<term>Monitoring</term>
					<term>Cultural Heritage</term>
					<term>Risk-Informed Systems</term>
					<term>Information Technologies</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Artificial intelligence is evolving and emerging in many new areas, but a literature analysis suggests that 3D and AI-based technologies for monitoring cultural heritage have not been studied enough. Cultural heritage requires continuous, detailed observation and protection, as buildings age and deteriorate over time. This process is critical and demands significant time, financial, and human resources. Since it is not always possible to provide these resources, computational and information technologies are needed to build a risk-informed system that analyzes cultural heritage and reports changes in a timely manner. The contribution of this document is therefore potentially essential for this area.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Cultural heritage plays a vital role in preserving the memory and knowledge of the past. Moreover, its preservation is essential when developing modern infrastructure and constructing new cities, roads, and railways. At the same time, we must not forget the development of tourist services, the adaptation of old buildings to modern needs, illegal archaeological excavations, and other potential risks related to the destruction of cultural heritage.</p><p>Preserving cultural heritage involves three main risks. First of all, it is a time-consuming process that must be performed repeatedly. If this is impossible, cultural heritage becomes damaged and requires renovation, or in some cases can even be lost forever. The second concern is financial resources. Continuous monitoring consumes considerable time and human resources, and therefore considerable money; without monitoring, cultural heritage deteriorates, and restoration costs even more. The last, but not least, risk concerns the people working with cultural heritage. They perform the monotonous job of checking cultural heritage for signs of destruction, when they could instead spend their time on research and renovation tasks.</p><p>Nowadays, cultural heritage monitoring is managed by cultural organizations, which are constantly confronted with large amounts of data to process and limited resources with which to do so. The solution is to create a risk-informed system that automates data monitoring and analysis based on 3D and AI technologies. By automating the collection and analysis of information, it is possible to achieve significant savings in both human and financial resources.</p><p>This work aims to systematize 3D and AI technologies for analyzing and recognizing cultural heritage and to develop a system for their practical application.</p><p>The solution of the following tasks is required:</p><p>1. 
Review of existing 3D and AI solutions for monitoring and analyzing cultural heritage preservation. 2. Research of the requirements, methods, and algorithms needed to solve the task. 3. Selection and collection of the necessary cultural heritage data to be analyzed. 4. Development of an architecture for monitoring and analyzing cultural heritage preservation. 5. Creation of an application program, a semantic segmentation system for cultural heritage.</p><p>The study's practical importance lies in creating a new risk-informed system for functional, objective, and cost-effective monitoring of cultural heritage changes, capable of monitoring many facilities and responding quickly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Risk analysis is one of the essential tools for preserving cultural heritage. It is used for decision-making in the process of managing and maintaining cultural heritage assets. For this purpose, both quantitative and qualitative analyses are used <ref type="bibr" target="#b20">[21]</ref>.</p><p>Although risk categorization plays a vital role in risk management in other disciplines, it has yet to be successfully applied to cultural heritage studies <ref type="bibr" target="#b21">[22]</ref>.</p><p>The importance of searching for and systematizing information in the modern world motivates the thematic modeling of text document collections in this study. Thematic models are used to identify trends in scientific publications and news streams, to classify and categorize image documents and video streams, and for information retrieval (including multilingual retrieval), web page tagging, spam detection, recommendation, and other applications.</p><p>3D scanning is the construction of a computer model of a material object. It has been studied by many researchers <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>. Currently, there are two leading 3D scanning technologies: laser scanning and photogrammetry.</p><p>Laser scanning is a technology for obtaining information about terrain and objects using a laser. This method has likewise been studied extensively <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>. The result of a laser scan is a cloud of laser reflection points.</p><p>There are two types of laser scanning: mobile and stationary. During mobile scanning, continuous measurement is performed while the vehicle is driving. 
During stationary scanning, the device is mounted in a fixed position, and measurement is carried out from several standing points.</p><p>Photogrammetry is the science of determining the appearance, shape, and position of objects in space by measuring their photographic images. It has been studied by researchers <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>.</p><p>No special devices are required to use this method; a camera on a modern phone is enough.</p><p>When choosing photogrammetry for 3D scanning, an important question is which factors affect the accuracy of the resulting 3D models, and how. Accuracy can depend on many factors <ref type="bibr" target="#b0">[1]</ref>: the camera's optical and digital characteristics and the spatial distribution of ground control points.</p><p>Imaging with remotely controlled systems, such as drones, has also been studied. This method has an advantage over standard photogrammetry thanks to the aerial view <ref type="bibr" target="#b1">[2]</ref>.</p><p>In recent years, interest in preserving cultural heritage has begun to grow, so more and more data is being digitized. This is very important for artificial intelligence, as model training is based on data. However, collecting a sufficiently large amount of data remains a problem, because it is time-consuming and requires manual annotation of the correct elements.</p><p>Machine learning technologies have become popular not only in computer science but in other fields as well. 
One of the reasons for this growth is the successful application of deep learning methods to image classification <ref type="bibr" target="#b2">[3]</ref>, in which convolutional neural networks (CNNs) exceed the human ability to analyze objects <ref type="bibr" target="#b3">[4]</ref>.</p><p>The potential of deep learning technologies for image analysis achieved a remarkable breakthrough in 2012, when the AlexNet model <ref type="bibr" target="#b4">[5]</ref> showed excellent results in the ImageNet competition. In 2014, GoogLeNet <ref type="bibr" target="#b5">[6]</ref> won the ImageNet competition with 93.3% top-5 classification accuracy. In 2015, Microsoft's ResNet <ref type="bibr" target="#b6">[7]</ref> won the ImageNet competition with 96.4% top-5 accuracy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Materials and Methods</head><p>Learning on point clouds is attracting more and more attention with the development of augmented and virtual reality and their wide application in computer vision, autonomous driving, and robotics. Deep learning is well researched for 2D problems, but for 3D cultural heritage data it is still evolving and needs further research and the creation of new datasets to train effective models.</p><p>In this paper, 3D data are represented as point clouds, and the task is their semantic segmentation. For this purpose, the rights to use the ArCH dataset (Architectural Cultural Heritage point clouds for classification and semantic segmentation) were obtained <ref type="bibr" target="#b7">[8]</ref>. The dataset consists of 17 annotated scenes, each point of which belongs to one of 10 classes: "arch": 0, "column": 1, "moldings": 2, "floor": 3, "door_window": 4, "wall": 5, "stairs": 6, "vault": 7, "roof": 8, "other": 9. Some of these scenes belong to the UNESCO heritage; others are part of the historical heritage and represent different historical periods and architectural styles.</p><p>Fifteen scenes are used for training and two for testing the models. The training scenes include churches, chapels, porticos, loggias, pavilions, and monasteries. The two test scenes have different characteristics. The first represents a simple, almost symmetrical one-level building with standard and repetitive geometric elements. The second represents a complex, asymmetrical building with two levels, shot both inside and outside, with different types of vaults, stairs, and windows. Point-based networks are used for the segmentation (Fig. <ref type="figure">4</ref>).</p></div>
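To make the label scheme above concrete, the sketch below encodes the listed class mapping and computes a per-class point distribution for one annotated scene. This is a hedged illustration: the exact on-disk layout of the ArCH scenes is not specified here, so the function assumes the labels have already been loaded into an integer array.

```python
import numpy as np

# Class label mapping listed above for the ArCH dataset.
ARCH_CLASSES = {
    "arch": 0, "column": 1, "moldings": 2, "floor": 3, "door_window": 4,
    "wall": 5, "stairs": 6, "vault": 7, "roof": 8, "other": 9,
}

def class_distribution(labels: np.ndarray) -> dict:
    """Return the fraction of points per class name for one annotated scene."""
    names = {v: k for k, v in ARCH_CLASSES.items()}
    values, counts = np.unique(labels, return_counts=True)
    return {names[int(v)]: c / labels.size for v, c in zip(values, counts)}
```

Such a distribution is useful before training, since class imbalance (e.g., walls dominating doors) strongly affects segmentation metrics.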
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: Point-based methods for clouds of points</head><p>The semantic segmentation task is to divide a cloud of points into parts according to the semantic meaning of the points. This section describes the current best semantic segmentation techniques: Point-wise MLP with PointNet <ref type="bibr" target="#b8">[9]</ref>, PointNet++ <ref type="bibr" target="#b9">[10]</ref>, and RandLA-Net <ref type="bibr" target="#b10">[11]</ref>; Point Convolution with PointCNN <ref type="bibr" target="#b11">[12]</ref>; RNN-based with RSNet <ref type="bibr" target="#b12">[13]</ref>; and Graph-based with DGCNN <ref type="bibr" target="#b13">[14]</ref>.</p><p>Point-wise MLP methods typically use a shared MLP (Multi-Layer Perceptron) as the central unit of their network because of its high efficiency. However, point features obtained with an MLP alone cannot capture the local geometry of point clouds or the interactions between points. Therefore, various methods, including PointNet, PointNet++, and RandLA-Net, have been proposed to provide more context for each point and explore deeper local structures.</p><p>Convolutional networks require highly structured data in order to share weights and apply other optimizations. Because a point cloud is not regularly structured, the data would have to be transformed into a voxel grid or an image collection before being passed in for learning. However, this transformation makes the resulting data excessively voluminous and can also change the nature of the data. That is why PointNet accepts a cloud of points without transformations.</p><p>The PointNet architecture (Fig. <ref type="figure" target="#fig_4">5</ref>) consists of three main modules: a max-pooling layer as a symmetric function for aggregating information from all points, a structure for combining local and global information, and two networks for aligning input points and point features. 
The idea of this model is to approximate a general function defined on a set of points by applying a symmetric function to the transformed elements in the network (Formula 1):</p><formula xml:id="formula_0">f({x_1, ..., x_n}) ≈ g(h(x_1), ..., h(x_n)),<label>(1)</label></formula><p>where</p><formula xml:id="formula_1">f: 2^(R^N) → R, h: R^N → R^K, g: R^K × ... × R^K → R</formula><p>and g is a symmetric function. h is approximated by an MLP network, and g by the composition of a function of one variable with the max-pooling function.</p><p>PointNet does not capture local structures induced by the metric space in which the points lie, which limits its ability to recognize fine-grained patterns and to generalize to complex scenes. PointNet++ is a hierarchical neural network that applies PointNet recursively to a set of input points. Using metric distances, PointNet++ can learn local features at increasing contextual scales (Fig. <ref type="figure" target="#fig_5">6</ref>).</p><p>In RandLA-Net's attentive pooling, an attention score is computed for each neighboring feature (Formula 3):</p><formula xml:id="formula_2">s_i^k = g(f_i^k, W),<label>(3)</label></formula><p>where W are the MLP weights. The features are then aggregated by weighted summation:</p><formula xml:id="formula_3">f̂_i = Σ_{k=1..K} f_i^k · s_i^k.</formula><p>Point Convolution uses the spatially local correlation of data represented densely in grids and provides a basis for learning features from point clouds. One example of such an architecture is PointCNN (Figure <ref type="figure" target="#fig_7">8</ref>). Most other semantic networks do not model the necessary relationships between points in a cloud. RNN-based models are presented here through the example of RSNet (Fig. <ref type="figure">9</ref>). A key component of the RSNet architecture is a highly efficient module of local dependence between points. RSNet takes clouds of unprocessed points as input and returns semantic labels for each of them.</p></div>
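As a minimal illustration of Formula 1, the sketch below stands in for h with a fixed random linear map plus ReLU, and for g with max pooling. The resulting f is invariant to any permutation of the input points, which is the core property PointNet relies on. The weights here are illustrative stand-ins, not PointNet's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))  # stand-in weights for the shared per-point MLP h

def f(points: np.ndarray) -> np.ndarray:
    """f({x_1, ..., x_n}) ≈ g(h(x_1), ..., h(x_n)) with g = max pooling."""
    per_point = np.maximum(points @ W, 0.0)  # h applied to every point (shared weights)
    return per_point.max(axis=0)             # symmetric aggregation g
```

Shuffling the rows of `points` leaves the output of `f` unchanged, since max pooling ignores ordering.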
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 9: RSNet diagram</head><p>Input and output feature blocks are used to generate features independently; the local dependency module sits between them. The input feature block receives the input points and generates attributes, while the output block receives the processed attributes and returns final predictions for each point. Both blocks use a sequence of multiple layers to create independent feature representations for each point. The local dependency module combines an aggregation layer, a bidirectional recurrent neural network (RNN) layer, and a separation layer. The local context problem is solved by first projecting the unordered points onto ordered features and then applying traditional learning algorithms.</p><p>Graph-based networks are used to capture the shapes and geometric structures of three-dimensional point clouds. First, a point cloud is represented as a set of simple interconnected shapes and superpoints; then a superpoint graph is used to capture the structure and context information. After that, the large-scale point cloud segmentation problem is divided into three subtasks: geometrically homogeneous partitioning, superpoint embedding, and contextual segmentation.</p><p>One example of a Graph-based architecture is DGCNN (Figure <ref type="figure" target="#fig_9">10</ref>). DGCNN uses EdgeConv, an operation suitable for CNN-based point cloud tasks, including classification and segmentation.</p><p>EdgeConv operates on graphs that are dynamically recomputed at each layer of the network. It captures the local geometric structure while preserving permutation invariance. Instead of generating point features directly from their embeddings, EdgeConv generates edge features that describe the relationships between a point and its neighbors. EdgeConv is designed to be invariant to the ordering of neighbors and is therefore permutation invariant.</p></div>
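The EdgeConv idea described above can be sketched as follows: for each point, gather its k nearest neighbors, form edge features from the point and its relative offsets to the neighbors, and pool over the neighborhood. This is a rough, hedged sketch; a real DGCNN additionally applies a learned MLP to each edge feature before pooling.

```python
import numpy as np

def edge_features(points: np.ndarray, k: int = 4) -> np.ndarray:
    """For each point x_i, build edge features (x_i, x_j - x_i) over its k
    nearest neighbors, then max-pool over neighbors (order-invariant)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]        # kNN indices, self excluded
    rel = points[idx] - points[:, None, :]          # edge vectors x_j - x_i, (n, k, 3)
    center = np.broadcast_to(points[:, None, :], rel.shape)
    feats = np.concatenate([center, rel], axis=-1)  # (n, k, 6)
    return feats.max(axis=1)                        # aggregate over the local graph
```

Because the pooling is over neighbors, the output does not depend on how the neighbors are ordered, matching the permutation-invariance property noted above.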
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>The results of the methods analyzed in the previous section are compared on the datasets S3DIS <ref type="bibr" target="#b14">[15]</ref>, ScanNet <ref type="bibr" target="#b15">[16]</ref>, Semantic3D <ref type="bibr" target="#b16">[17]</ref>, and SemanticKITTI <ref type="bibr" target="#b17">[18]</ref>. For this purpose, the mean class accuracy (mAcc), overall accuracy (oAcc), and mean class intersection over union (mIoU) metrics are used. The data are taken from the articles of the corresponding algorithms and datasets.</p><p>S3DIS: all point clouds are obtained without manual intervention using a Matterport scanner. The dataset consists of 271 rooms belonging to 6 large-scale indoor scenes from 3 different buildings, covering 6020 sq. m. These areas mainly include offices, training and exhibition spaces, and conference rooms.</p><p>ScanNet: annotations contain estimated calibration parameters, camera poses, three-dimensional surface reconstructions, textured meshes, dense object-level semantic segmentation, and CAD models. The dataset contains annotated RGB-D environment scans: in total, there are 2.5M images in 1513 scans obtained in 707 different locations.</p><p>Semantic3D: includes about 4 billion 3D points obtained using static ground-based laser scanners, covering up to 160x240x30 meters of space. The point clouds belong to 8 classes (urban and rural) and contain coordinates, RGB information, and intensity.</p><p>SemanticKITTI: an extensive outdoor dataset containing detailed point annotations for 28 classes. The dataset contains labels for the whole horizontal 360-degree field of view of a rotating laser sensor. Table <ref type="table" target="#tab_3">3</ref> shows the results achieved in the original articles of the methods and datasets. 
As can be seen from the results presented in this table, the best metrics are achieved by the RandLA-Net model on the Semantic3D and SemanticKITTI datasets and by the PointCNN model on the S3DIS and ScanNet datasets. However, quite a few results on the various datasets are unknown. Accordingly, we can assume that the RandLA-Net or PointCNN models will work best on the ArCH dataset; however, because of the omitted values, it may turn out that one of the other models is still better than these two.</p></div>
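The three metrics used above can all be derived from a class confusion matrix. The sketch below assumes the common convention that rows are ground-truth classes and columns are predictions:

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray) -> tuple:
    """Return (oAcc, mAcc, mIoU) from a class confusion matrix
    (rows = ground truth, columns = predictions)."""
    tp = np.diag(conf).astype(float)           # correctly labeled points per class
    gt = conf.sum(axis=1).astype(float)        # points per true class
    pred = conf.sum(axis=0).astype(float)      # points per predicted class
    o_acc = float(tp.sum() / conf.sum())
    m_acc = float(np.mean(tp / np.maximum(gt, 1)))
    m_iou = float(np.mean(tp / np.maximum(gt + pred - tp, 1)))
    return o_acc, m_acc, m_iou
```

For example, the confusion matrix [[3, 1], [1, 3]] yields oAcc = 0.75, mAcc = 0.75, and mIoU = 0.6, illustrating why mIoU is usually the strictest of the three.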
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>The experiments section demonstrates the performance of semantic segmentation models on the S3DIS, ScanNet, Semantic3D, and SemanticKITTI datasets. For the Semantic3D dataset, the best model is RandLA-Net, which shows strong results with oAcc = 94.8 and mIoU = 77.4; on the two previous datasets, the maximum oAcc was 85.1 and the maximum mIoU 57.26.</p><p>The SemanticKITTI dataset is also under-researched. The mAcc metric is reported for PointNet only and is 29.9. The mIoU metric is reported for PointNet and RandLA-Net and is 17.9 and 53.9, respectively.</p><p>Therefore, comparing the presented methods PointNet, PointNet++, RandLA-Net, PointCNN, RSNet, and DGCNN on the S3DIS, ScanNet, Semantic3D, and SemanticKITTI datasets leads to the following conclusions. Further research is needed on these methods and datasets, as not all possible combinations were considered in the original articles in which the models and datasets were first presented. Further research should use the same metrics, computing mAcc, oAcc, and mIoU for all combinations of models and datasets. Once consistent metrics are available, it will be possible to compare which models work best on which datasets. The last step will be to evaluate the presented models on the ArCH dataset against the same metrics mAcc, oAcc, and mIoU.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>The protection of cultural heritage in the epoch of urbanization and city development is critical for preserving history. However, this is an enormous task with many risks related to time, financial, and human resources. Therefore, a solution for automating the monitoring and analysis of data through semantic segmentation of point clouds was presented. A risk-informed system based on computational and information technologies will reduce these risks and increase the efficiency with which these resources are used.</p><p>The existing solutions were reviewed, the methods and datasets corresponding to the goal were analyzed, and their results on different metrics were collected and compared.</p><p>The next steps in continuing this study will be: conducting experiments with the presented methods on the respective datasets; comparing the experimental results on the same metrics; and verifying the presented methods on the ArCH dataset.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Photography, 3D color image, and semantic segmentation of a training object</figDesc><graphic coords="4,223.55,226.30,156.97,144.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Photography, 3D color image, and semantic segmentation of the first testing object</figDesc><graphic coords="4,251.50,464.53,192.05,72.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Photography, 3D color image, and semantic segmentation of the second testing object</figDesc><graphic coords="5,193.75,217.65,173.85,115.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figures 1 -</head><label>1</label><figDesc>Figures 1-3 show a visualization of one of the training objects and two objects used to test the quality of the models. Objects are represented as clouds of points with corresponding r / g / b values for each point to indicate color and class. Data were obtained using various sensors (cameras, scanners) and platforms (UAV and others). Preprocessing included spatial translation, subsampling, and feature selection.</figDesc></figure>
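The subsampling step mentioned in the preprocessing above can be sketched as a simple voxel-grid filter. This is an assumption for illustration: the exact subsampling procedure used for the ArCH scenes may differ; here one point per occupied cell is kept.

```python
import numpy as np

def grid_subsample(points: np.ndarray, cell: float = 0.01) -> np.ndarray:
    """Keep one point per occupied cubic cell of side `cell` (same units as
    the coordinates); here, the first point encountered in each cell."""
    keys = np.floor(points / cell).astype(np.int64)  # integer cell index per point
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]
```

With `cell=0.01` (1 cm, matching the subsampling column in Table 2), nearby duplicate points collapse to a single representative, which bounds the point density before training.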
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: PointNet architecture</figDesc><graphic coords="7,131.30,295.83,356.20,167.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: PointNet++ architecture RandLA-Net is a lightweight neural architecture (Figure 7) that can handle large-scale point clouds 200 times faster than other architectures, as most existing architectures use time-consuming preprocessing and post-processing techniques. PointNet is computationally efficient but does not capture the contextual information of each point. RandLA-Net handles large 3D point clouds in one pass without requiring any pre/post-processing steps, such as voxelization, block separation, or graphing. RandLA-Net relies only on random sampling within the network and therefore requires much less memory and computation.</figDesc><graphic coords="8,135.08,348.22,348.65,171.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: RandLA-Net architecture The first step, Local Spatial Encoding (LocSE), is finding adjacent points. For each point, its neighboring points are found with a simple K-nearest neighbors (KNN) search based on the point-wise Euclidean distance. The next step is Relative Point Position Encoding: for each of the K nearest points {p_i^1 ... p_i^k ... p_i^K} of a point p_i, the relative point position is encoded.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: PointCNN architecture for classification (a i b) and segmentation (c) PointCNN learns the transformation of input points to weigh the input features associated with the points and rearrange the points in the canonical order. The PointCNN architecture contains two designs: Hierarchical Convolution and χ-Conv Operator.Hierarchical Convolution is recursively applied to local parts of the grid, often reducing data to fewer representative points but with more saturated information.The χ-Conv operator works in local parts, accepts connected points as input data, and makes convolution. Neighboring points are transformed into local coordinate systems of representative points, and later these local coordinates are individually combined with the corresponding features (Formula 4).</figDesc><graphic coords="9,135.08,336.34,348.65,192.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head></head><label></label><figDesc>where σ MLP is used separately for each point, as in PointNet.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: DGCNN architecture Because EdgeConv creates a local graph and learns embeddings for edges, the model can group points in Euclidean and semantic space. Instead of working on individual points, as in PointNet, DGCNN uses local geometric structures to construct a local graph of adjacent points and apply operations on the edges connecting adjacent pairs of points.</figDesc><graphic coords="11,137.65,144.04,342.95,213.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,141.40,471.15,335.70,169.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Information about testing objects</figDesc><table><row><cell>Name</cell><cell>Number of points</cell><cell>Scene</cell><cell>Getting data</cell><cell>Class number</cell><cell>Subsampling (cm)</cell></row><row><cell>A_SMG_portico</cell><cell>17,798,012</cell><cell>Outdoor</cell><cell>TLS + UAV</cell><cell>9/9</cell><cell>1</cell></row><row><cell>B_SMV_chapel_27to35</cell><cell>16,200,442</cell><cell>Indoor/Outdoor</cell><cell>TLS + UAV</cell><cell>9/9</cell><cell>1</cell></row></table><note>Tables 1 and 2 provide more information about training and testing objects. The total number of points for training is 102,139,969, and for testing 33,998,454.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc></figDesc><table><row><cell cols="9">Effectiveness evaluation of semantic segmentation models on S3DIS, ScanNet, Semantic3D, and</cell></row><row><cell cols="2">SemanticKITTI datasets</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell cols="2">S3DIS</cell><cell cols="2">ScanNet</cell><cell cols="2">Semantic3D</cell><cell cols="2">SemanticKITTI</cell></row><row><cell></cell><cell>mAcc</cell><cell>mIoU</cell><cell>oAcc</cell><cell>mIoU</cell><cell>oAcc</cell><cell>mIoU</cell><cell>mAcc</cell><cell>mIoU</cell></row><row><cell cols="2">PointNet 48.98</cell><cell>41.09</cell><cell>-</cell><cell>14.69</cell><cell>-</cell><cell>-</cell><cell>29.9</cell><cell>17.9</cell></row><row><cell cols="2">PointNet++ -</cell><cell>50.04</cell><cell>71.4</cell><cell>34.26</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row><row><cell cols="2">RandLA-Net -</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>94.8</cell><cell>77.4</cell><cell>-</cell><cell>53.9</cell></row><row><cell cols="2">PointCNN 63.86</cell><cell>57.26</cell><cell>85.1</cell><cell>45.8</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row><row><cell>RSNet</cell><cell>59.42</cell><cell>56.5</cell><cell>-</cell><cell>39.35</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row><row><cell>DGCNN</cell><cell>-</cell><cell>56.1</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head></head><label></label><figDesc>The presented models were of different types: PointNet, PointNet++, and RandLA-Net are Point-wise MLP models, PointCNN is Point Convolution, RSNet is RNN-based, and DGCNN is Graph-based. Furthermore, these methods were tested on different types of environment: offices, training and exhibition spaces, conference rooms, cities and towns, and open and closed spaces. The results are shown in Table 3. They show that different models performed best on different datasets and metrics. For the S3DIS dataset, the best model is PointCNN with mAcc equal to 63.86, while PointNet has mAcc equal to 48.98 and RSNet 59.42. PointCNN also shows the best result on the mIoU metric, 57.26, versus 41.09 for PointNet, 50.04 for PointNet++, 56.5 for RSNet, and 56.1 for DGCNN. For the ScanNet dataset, the best model is also PointCNN with oAcc equal to 85.1, versus 71.4 for PointNet++. PointCNN likewise shows the best mIoU, 45.8, as opposed to 14.69 for PointNet, 34.26 for PointNet++, and 39.35 for RSNet.</figDesc><table /></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The study examines the ArCH dataset and the best current techniques for segmenting 3D point clouds: Point-wise MLP with PointNet, PointNet++, and RandLA-Net; Point Convolution with PointCNN; RNN-based with RSNet; and Graph-based with DGCNN. The efficiency of these semantic segmentation models is compared on the S3DIS, ScanNet, Semantic3D, and SemanticKITTI datasets.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Accuracy of cultural heritage 3D models by RPAS and terrestrial photogrammetry</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bolognesi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Furini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pellegrinelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Russo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="113" to="119" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">UAV for mapping historic buildings: From 3D modeling to BIM</title>
		<author>
			<persName><forename type="first">E</forename><surname>Karachaliou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Georgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Psaltis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Stylianidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci</title>
		<imprint>
			<biblScope unit="volume">XLII</biblScope>
			<biblScope unit="page" from="397" to="402" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Imagenet classification with deep convolutional neural networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">NIPS</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1097" to="1105" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">CNN image retrieval learns from BoW: Unsupervised finetuning with hard examples</title>
		<author>
			<persName><forename type="first">F</forename><surname>Radenović</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tolias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Chum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Eur. Conf. Comput. Vis. ECCV</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Imagenet classification with deep convolutional neural networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">NIPS</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1097" to="1105" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
	<note>Duplicate of b0/b2 entry for the same work; retained so citations targeting #b4 still resolve.</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<author>
			<persName><forename type="first">C</forename><surname>Szegedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sermanet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Reed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Anguelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Erhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vanhoucke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rabinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Going Deeper with Convolutions</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<title level="m">Deep Residual Learning for Image Recognition</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A benchmark for large-scale heritage point cloud semantic segmentation</title>
		<author>
			<persName><forename type="first">F</forename><surname>Matrone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lingua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pierdicca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">S</forename><surname>Malinverni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Paolanti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grilli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Remondino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Murtiyoso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Landes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1419" to="1426" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">PointNet: Deep learning on point sets for 3d classification and segmentation</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">R</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">J</forename><surname>Guibas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="652" to="660" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Pointnet++: Deep hierarchical feature learning on point sets in a metric space</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">R</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">J</forename><surname>Guibas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="5099" to="5108" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">RandLA-Net: Efficient semantic segmentation of large-scale point clouds</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Trigoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Markham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="11108" to="11117" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Pointcnn: Convolution on x-transformed points</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Di</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="820" to="830" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Recurrent slice networks for 3d segmentation of point clouds</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Neumann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2626" to="2635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Dynamic graph cnn for learning on point clouds</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Sarma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bronstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Solomon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Acm Transactions On Graphics (tog)</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="12" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">3d semantic parsing of large-scale indoor spaces</title>
		<author>
			<persName><forename type="first">I</forename><surname>Armeni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sener</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Zamir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Brilakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Savarese</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1534" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Scannet: Richly-annotated 3d reconstructions of indoor scenes</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Savva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Halber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Funkhouser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nießner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5828" to="5839" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Hackel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Savinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ladicky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Wegner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Schindler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pollefeys</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1704.03847</idno>
		<title level="m">Semantic3D.net: A new large-scale point cloud classification benchmark</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">SemanticKITTI: A dataset for semantic scene understanding of lidar sequences</title>
		<author>
			<persName><forename type="first">J</forename><surname>Behley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Garbade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Milioto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Quenzel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Behnke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stachniss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gall</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE International Conference on Computer Vision. IEEE</title>
				<meeting>the IEEE International Conference on Computer Vision. IEEE</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="9297" to="9307" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The issue of access sharing to data when building enterprise information model</title>
		<author>
			<persName><forename type="first">N</forename><surname>Boiko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IX International Scientific and Technical conference, Computer science and information technologies (CSIT 2014)</title>
				<meeting><address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="23" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Application of Machine Algorithms for Classification and Formation of the Optimal Plan</title>
		<author>
			<persName><forename type="first">N</forename><surname>Boyko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hlynka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS 2021)</title>
		<title level="s">Main Conference</title>
		<meeting>the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS 2021)<address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">April 22-23, 2021</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1853" to="1865" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Risks and resilience of cultural heritage assets</title>
		<author>
			<persName><forename type="first">V</forename><surname>Rajcic</surname></persName>
		</author>
		<ptr target="https://www.researchgate.net/publication/299395298_Risks_and_resilience_of_cultural_heritage_assets" />
	</analytic>
	<monogr>
		<title level="m">International Conference: Europe and the Mediterranean: Towards a Sustainable Built Environment At: Malta</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Risk Characterization for Preserving Cultural Heritage Assets</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sharifi</surname></persName>
		</author>
		<ptr target="https://www.chnt.at/wp-content/uploads/eBook_CHNT22_Sharifi.pdf" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
