<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Saliency-driven 3D Reconstruction and Printing for Accessible Museums</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Cristiana</forename><surname>Sofica</surname></persName>
							<email>cristianasimona.sofica@unipd.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>1, Lungargine del Piovego</addrLine>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Elisa</forename><surname>Vargiu</surname></persName>
							<email>elisa.vargiu@studenti.unipd.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>1, Lungargine del Piovego</addrLine>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mara</forename><surname>Pistellato</surname></persName>
							<email>mara.pistellato@unive.it</email>
							<affiliation key="aff1">
								<orgName type="department">DAIS</orgName>
								<orgName type="institution">Università Ca&apos; Foscari di Venezia</orgName>
								<address>
									<addrLine>155 via Torino</addrLine>
									<settlement>Venezia</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lucia</forename><surname>Lionello</surname></persName>
							<email>lucia.lionello@unipd.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>1, Lungargine del Piovego</addrLine>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gianmaria</forename><surname>Concheri</surname></persName>
							<email>gianmaria.concheri@unipd.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>1, Lungargine del Piovego</addrLine>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Saliency-driven 3D Reconstruction and Printing for Accessible Museums</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">B98E1991EA170D070FCAD5969C8754D4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Cultural heritage</term>
					<term>3D reconstruction</term>
					<term>3D printing</term>
					<term>Fixation prediction</term>
					<term>Accessibility</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Three-dimensional acquisition and reproduction technologies are often exploited in the cultural heritage field for a variety of applications such as conservation, restoration, and dissemination. Another valuable use of 3D data is to make exhibitions more accessible to visitors with impairments, allowing them to fully experience and enjoy the acquired objects. In this short paper, we explore the accessibility inherently provided by 3D representations of real-world objects, with a particular focus on the quality of the models and 3D printing, as well as on presentation aspects. To this end, we propose to apply a state-of-the-art saliency-driven process, generating a fixation map that identifies the object's salient areas, which are then reproduced with higher definition during 3D printing to improve the object's accessibility. We present a case-study involving the full process of 3D scanning and printing the Coats of Arms in Palazzo Bo (Padova, Italy) to make them accessible to visitors with visual impairment. We employed different scanning techniques and applied the attention mechanism to the acquired data to obtain the objects' salient areas and drive the printing process accordingly. Preliminary tests involving participant feedback reveal that printing the objects with a variable detail level allows visitors to better understand the object as a whole and to appreciate the relevant details.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, advancements in digital technology have revolutionized the way we document, preserve, and share cultural artefacts. One of these tools is undoubtedly 3D reconstruction, comprising a vast set of methods for acquiring objects, from coins <ref type="bibr" target="#b0">[1]</ref> to entire cities <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. Such methods are largely employed in the cultural heritage domain for preservation <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>, analysis <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>, restoration <ref type="bibr" target="#b8">[9]</ref> and dissemination, for example via virtual tours <ref type="bibr" target="#b9">[10]</ref> or interactive visualisations <ref type="bibr" target="#b10">[11]</ref>. Nowadays art and culture need to be accessible to everyone: an additional application of 3D reconstruction is therefore to enhance the accessibility of heritage objects. This can be done, for example, by making digital content available to users <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref> or by providing access to remote sites that are not easily reachable (e.g. underwater locations <ref type="bibr" target="#b13">[14]</ref>). Another crucial aspect of inclusiveness concerns individuals with disabilities <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>. This means not only producing the content itself or ensuring physical accessibility, but also actively offering the same experience to people with impairments. Also in this case, technology offers a valid set of tools to implement these applications <ref type="bibr" target="#b16">[17]</ref>, enhancing accessibility for a wide range of visitor categories. 
In this work we aim to embed computer vision techniques directly into the 3D reconstruction and printing processes, with the final goal of adapting state-of-the-art saliency models to drive the printing process and enhance the experience of visually impaired people. This is carried out by exploiting the well-known set of techniques falling under the terms of saliency detection and fixation prediction, which model the visual attention of people looking at the same subject. Additionally, we present a case-study involving the 3D scanning and printing of the Coats of Arms on display at Palazzo Bo (Padova, Italy). We scanned six objects and 3D printed them following the fixation map derived from the projection of the acquired surfaces. The main goal of the project is to create a reduced tactile "Coats of Arms wall", so that their significance and meaning are accessible to visually impaired people. A preliminary study including feedback from blind people shows the feasibility and the potential value of the project.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Accessibility for visually impaired individuals refers to the design of services, environments, and technologies that enable people with visual impairments to participate in society. Ensuring access to cultural heritage for visually impaired individuals can be implemented in several ways, for instance by providing audio descriptions <ref type="bibr" target="#b17">[18]</ref>, accessible digital content with specific applications and technologies <ref type="bibr" target="#b18">[19]</ref>, or tactile models. In <ref type="bibr" target="#b19">[20]</ref> the authors propose a ring-like device to be worn while exploring a 3D surface, so that the user receives an audio description of the touched area. With a similar idea, the authors in <ref type="bibr" target="#b20">[21]</ref> propose to track the user's gestures with a depth camera to guide the tactile exploration. In <ref type="bibr" target="#b17">[18]</ref> the authors propose to build 3D models and make them accessible to blind people via a haptic module, and in <ref type="bibr" target="#b21">[22]</ref> the authors developed a prototype in which blind users can explore an entire location combining tactile and audio descriptions. Another example is <ref type="bibr" target="#b22">[23]</ref>. 3D printing is a widely studied technology that has been investigated in the cultural heritage domain for several purposes, such as preservation, restoration or dissemination, just to name a few <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b25">26]</ref>. Some of these applications include accessibility for people with visual impairments. The work presented in <ref type="bibr" target="#b26">[27]</ref> presents a procedure for 3D printing specifically designed for blind people. 
In <ref type="bibr" target="#b27">[28]</ref> the authors analyse scanning and printing techniques for the specific target of blind users accessing cultural content, while <ref type="bibr" target="#b28">[29]</ref> presents an evaluation of the user experience with 3D printed replicas. In <ref type="bibr" target="#b29">[30]</ref> the authors propose to increase the accessibility of a permanent exhibition by printing enlarged museum specimens, promoting interactive and inclusive experiences. Other studies can be found in <ref type="bibr" target="#b30">[31,</ref><ref type="bibr" target="#b31">32]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Attention-driven Printing Applied to 3D Models</head><p>One of the challenges of accessibility is to develop a methodology to effectively create a presentation offering the same experience to different people. In particular, for visitors with visual impairments we have to exclude the sense most used for the visual arts: sight. The questions that follow are: what are the visual features that make us characterise an object? And are these features also interesting for a blind person? In this regard, we propose to address this problem by exploiting visual saliency. When looking at an object, our gaze unintentionally lingers on some specific areas. Indeed, by tracking eye movements while observing a subject, we can detect which regions are visually more interesting for our sight. Analysing the eye behaviour of many subjects observing the same scene allows us to compute the so-called fixation map. The concept of a fixation map was introduced in 2002 by D.S. Wooding <ref type="bibr" target="#b32">[33]</ref> and consists of defining a function that outputs the amount of visual attention for a given image location.</p><p>Subsequent works aim to predict fixation maps based on image features such as symmetry <ref type="bibr" target="#b33">[34]</ref> or using data-driven approaches <ref type="bibr" target="#b34">[35,</ref><ref type="bibr" target="#b35">36,</ref><ref type="bibr" target="#b36">37]</ref>. Since gaze estimation is closely related to human vision behaviour, fixation prediction models are often associated with salient object detection <ref type="bibr" target="#b37">[38,</ref><ref type="bibr" target="#b38">39]</ref> or used to drive other tasks, such as classification or segmentation.</p><p>In this work we propose to apply state-of-the-art fixation prediction models to the acquired 3D objects and use the resulting fixation maps to drive the 3D printing. 
Starting from the acquired textured object, we rotate the 3D mesh according to a reference system and create a projection on a virtual plane perpendicular to the original object orientation. In this way, we can use the projected texture as input for the fixation prediction and identify the areas that would appear most attractive to an observer. Exploiting the 3D acquisition of the objects, we can project the visually relevant areas back and adapt the printing process and some presentation aspects according to the fixation results. The main goal of the described process is to focus on the most salient object regions, so that visitors touching the printed object can gain a better understanding of the artefact in all its parts.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> summarises the proposed pipeline for acquisition and printing. First, the 3D scan of all the objects is performed, followed by some post-processing steps on the raw data to improve the surface quality. The core part of our pipeline involves the application of fixation prediction to the projected texture of the acquired object. This allows us to effectively recognise the salient areas of the object that will guide the printing process. Finally, models are prepared and printed using two different technologies. In the remainder we describe a case-study where we exploited the attention mechanism for two aspects: first, we adapted the resolution of 3D printing according to the relevance; second, we focused on the most relevant regions and printed them separately with a different technology to offer better legibility.</p></div>
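The projection-and-saliency step described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the deep fixation predictor of [37] is replaced here by a simple Gaussian centre-surround stand-in, and the projected texture is assumed to be already rendered as an RGB array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(image, sigma_center=4.0, sigma_surround=32.0):
    """Toy centre-surround saliency as a stand-in for a learned
    fixation predictor: high values where fine detail stands out
    from its smoothed surroundings. Output is normalised to [0, 1]."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    center = gaussian_filter(gray, sigma_center)
    surround = gaussian_filter(gray, sigma_surround)
    sal = np.abs(center - surround)
    sal = gaussian_filter(sal, sigma_center)   # smooth into fixation blobs
    lo, hi = sal.min(), sal.max()
    return (sal - lo) / (hi - lo + 1e-9)

def salient_mask(att, threshold=0.5):
    """Binary mask of regions whose attention exceeds the threshold;
    in the paper's pipeline these areas drive the high-detail print."""
    return att > threshold
```

Swapping `fixation_map` for the actual pretrained model of [37] leaves the rest of the pipeline unchanged: the mask is simply projected back onto the 3D surface.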
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Case-Study: Scanning and Printing of Coats of Arms</head><p>Palazzo Bo is one of the most iconic buildings in Padova: its rooms are adorned with over three thousand heraldic Coats of Arms depicted in frescoes and carved in stone (see Figure <ref type="figure" target="#fig_1">2</ref>, left). These objects represent people who held prestigious academic positions, therefore their presence offers unique insights into the history and culture of the place. However, traditional display methods limit accessibility for individuals with visual impairments. After an initial discussion with the museum staff, we concluded that reproducing the Coats of Arms was the most suitable choice for the project, for two main reasons: (i) Coats of Arms are omnipresent throughout the museum, adorning every wall and hall, so they are its most distinctive and prevalent feature; (ii) the museum staff usually face challenges in explaining the Coats of Arms to visually impaired visitors. We adopted two different 3D scanners for data acquisition: the EinScan Pro HD from Shining 3D (EinScan) and the Revopoint POP 3 from Revopoint 3D Technologies Inc. (POP3). We chose two similar tools in order to compare a high-end instrument such as the EinScan (around 14,000 Euros) with a low-cost device (the POP3 costs around 700 Euros), with the idea that institutions with a limited budget could possibly benefit from the same technique. Both devices are handheld and capture the scene as the operator manually moves the device around the object, so that different points of view are acquired and automatically registered by the companion software. The EinScan offers different acquisition modes: the HD mode offers an accuracy of 0.045 mm and acquires 3000 points per second, while the Rapid Scan mode offers a maximum accuracy of 0.1 mm. The POP3 has a precision of up to 0.5 mm at a working distance of 150 − 400 mm. 
Figure <ref type="figure">3</ref> gives an overview of the main post-processing steps, including noise reduction, point cloud alignment, hole filling and surface reconstruction, performed for each acquired object in order to obtain a printable mesh. The first step involves the removal of all points that are not part of the object itself, such as the background; the following part consists in obtaining a watertight surface from the point cloud, i.e. generating vertices and normals and closing the holes. Finally, since the objects are fixed to the walls of the room, their back needs to be reconstructed as a plane so that, after printing, the object can rest on a horizontal surface. This is visible in the rightmost image of Figure <ref type="figure">3</ref>, where we can notice the additional thickness added to create a planar base. After the characterisation of the salient object areas, we adapted the 3D models, isolated the identified regions of interest, and proceeded with model preparation for printing. We decided to employ different technologies to print different areas of the objects and offer better readability according to the fixation maps (see Section 4.1 for details). In particular, we adopted fused deposition modelling (FDM) and stereolithography (SLA). The FDM technology was chosen to print the 3D model of the complete objects. FDM is a material extrusion technique in which a thermoplastic polymer filament is heated and a movable head deposits the material layer by layer. We employed the Creality CR-10 Smart Pro 3D printer, which has a print size of 300 × 300 × 400 mm and offers a printing precision of ±0.1 mm. In Figure <ref type="figure" target="#fig_1">2</ref> we show an image taken while printing a complete object with white material. The second technology we employed is SLA, used to print the surface details requiring a higher accuracy. 
It is a vat polymerisation method, wherein layers of a liquid contained in a vat are successively exposed to ultraviolet (UV) light. The liquid material reacts to the incoming light, curing only the areas exposed to UV and causing selective solidification. We used the Formlabs Form 3 printer, characterised by a laser spot size of 85 microns, a build volume of 145 × 145 × 185 mm and a layer thickness of 25 − 300 microns. Figure <ref type="figure" target="#fig_1">2</ref> shows the completed print of a selected inscription detail: the object grows layer by layer from top to bottom, and thus a support structure is needed in this case while the printing proceeds. Usually, a higher number of points suggests a higher accuracy: looking at the EinScan acquisitions, we can observe that objects A, B and D have ≈ 1M points, while object F has ≈ 6M points due to the HD mode, which was selected only for the last object. Regarding the POP3 acquisitions, objects C, D and E exhibit roughly half the points compared to the other objects, denoting a lower surface resolution. Object D was acquired with both scanners to assess the feasibility and analyse possible limitations of the different devices. The EinScan shows a higher resolution, while the surface acquired by the POP3 is smoother and exhibits a less marked inscription. Despite the inherent challenges of manual acquisition, the POP3 managed to yield satisfactory results, largely attributed to the capabilities of its software (Revoscan 5), which played an important role in refining the acquired data. After acquisition, we used the acquired models to generate two-dimensional texture projections onto a plane, obtaining an RGB image for each object.</p><p>Figure <ref type="figure" target="#fig_3">5</ref> shows the images used as input for visual saliency. We applied the fixation prediction method proposed in <ref type="bibr" target="#b36">[37]</ref> and used the original weights as provided by the authors. 
The resulting visual attention is shown in Figure <ref type="figure" target="#fig_3">5</ref>, applied to two of our objects. We plot fixation maps with a colour scale representing different levels of attention, where a value of 0 means no attention and 1 indicates the maximum attention. The third and sixth images of Figure <ref type="figure" target="#fig_3">5</ref> show the masking applied to the objects according to the attention map, giving a clear interpretation of the most salient areas on the objects' surfaces. For all the analysed objects we can identify two to three interesting areas exhibiting the highest attention, depending on the individual object features. For all items, one area that is particularly interesting to our sight is the central part of the Coats of Arms, depicting the symbol representing its owner. Another interesting area for objects A and B is the small cherub on the top, while for the other objects (D, E, F) the upper areas are not particularly relevant. Finally, for some objects (e.g. item F) the bottom part with inscriptions is also attractive.</p><p>We concluded that the central parts of the objects need to be printed with higher detail and also to be highlighted during presentation. We also focused on the printing of the cherubs and the inscriptions on the lower parts of the objects to improve readability. First, we printed the entire objects using the FDM printing technique, setting the layer height to 0.2 mm and the infill density to 15%. Figure <ref type="figure" target="#fig_4">6</ref> (left) shows an object printed with PLA: the overall quality is good, except for some flat areas in which the printing layers are clearly recognizable. In particular, this is quite evident in some details (see Figure <ref type="figure" target="#fig_4">6</ref>, center), where the resolution is degraded by the printing layers. 
Following these observations, SLA printing was adopted to reproduce the regions with the highest visual attention. We used Grey v4 resin, well-suited for general-purpose prototyping and particularly for models demanding intricate details, such as ours. Figure <ref type="figure" target="#fig_4">6</ref> (right) shows a detail printed with SLA technology, offering a higher resolution and a better understanding of the underlying surface details.</p><p>As a preliminary result, we conducted a survey with two individuals with visual impairments who volunteered to provide initial feedback. The main goal of the session was to determine the objects' general usefulness, and also to assess the quality improvement offered by the direct application of visual attention prediction. During the survey some challenges were identified, particularly regarding the initial comprehension of the objects and the influence of the printing layers on readability. In particular, the printing layers of FDM prints are clearly perceptible, making it necessary to explain that they are not part of the object. Differences between FDM and SLA printing were also noted, suggesting the need for a refinement of the printing techniques to optimize tactile perception. Regarding the effectiveness of the visual attention approach, during the survey we noted which surface regions were most interesting from the tactile point of view, and we observed that the regions highlighted by the fixation prediction were the most attractive surface areas during the tactile examination. Moreover, the participants appreciated the SLA-printed details as a helpful means of improving their understanding of the whole object. Overall, the project was deemed useful in providing tactile representations of the Coats of Arms, facilitating comprehension and engagement. 
As future work, we aim to extend the survey in a more structured way, collecting feedback from a wider range of people and performing an extensive study on object readability driven by visual attention and fixation prediction mechanisms.</p></div>
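The region-routing logic described in this section (high-attention areas reproduced with SLA, the rest of the object with FDM) can be sketched as follows. This is an illustrative helper, not code from the paper: the `region_labels` partition and the 0.6 threshold are assumptions introduced here.

```python
import numpy as np

def assign_print_technology(att_map, region_labels, threshold=0.6):
    """Given a fixation map with values in [0, 1] and an integer label
    image partitioning the object projection into regions (0 = background),
    route each region to SLA (high attention, fine detail) or FDM
    (the rest of the object). The threshold is a tunable design choice."""
    plan = {}
    for label in np.unique(region_labels):
        if label == 0:          # skip background
            continue
        mean_att = att_map[region_labels == label].mean()
        plan[int(label)] = "SLA" if mean_att > threshold else "FDM"
    return plan
```

A per-region mean keeps the decision robust to isolated attention peaks; a per-pixel maximum would instead route any region containing a single bright fixation to SLA.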
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper we propose to merge 3D reconstruction and printing techniques with computer vision algorithms to enhance the experience of visually impaired visitors. We present an attention-driven method which exploits the 3D scanning of artefacts and applies it to the printing process of cultural heritage content. A preliminary study involving a survey highlights the effectiveness of the method, giving a strong direction for future improvements and investigations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Our pipeline for acquisition and attention-driven printing.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: From left to right: a wall detail of the Great Hall in Palazzo Bo with hanged Coats of Arms, 3D acquisition process and printing with different technologies.</figDesc><graphic coords="3,379.61,65.62,95.63,110.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :Figure 4 :</head><label>34</label><figDesc>Figure 3: Post-processing steps performed after raw data acquisition.</figDesc><graphic coords="4,74.79,193.44,71.93,96.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Fixation prediction applied to the acquired Coats of Arms. First row shows the input data coming from the 3D model, second row the masked object when the saliency map (in the third row) is applied.</figDesc><graphic coords="5,82.12,217.52,69.43,99.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Coats of Arms printed with white PLA with some inscription details where printing layers are relevant.</figDesc><graphic coords="6,240.27,65.72,102.27,136.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Acquisition results for all the acquired objects with two different 3D scanners.</figDesc><table><row><cell></cell><cell cols="2">EinScan</cell><cell>POP3</cell><cell></cell></row><row><cell>Object</cell><cell>Points</cell><cell cols="3">Triangles Points Triangles</cell></row><row><cell>A</cell><cell>955,344</cell><cell>1,489,891</cell><cell>-</cell><cell>-</cell></row><row><cell>B</cell><cell>946,244</cell><cell>1,556,750</cell><cell>-</cell><cell>-</cell></row><row><cell>C</cell><cell>-</cell><cell>-</cell><cell cols="2">647,394 1,959,020</cell></row><row><cell>D</cell><cell>941,649</cell><cell cols="3">1,463,119 558,870 1,080,209</cell></row><row><cell>E</cell><cell>-</cell><cell>-</cell><cell cols="2">614,255 1,896,255</cell></row><row><cell>F</cell><cell cols="2">6,284,169 8,666,750</cell><cell>-</cell><cell>-</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Results</head><p>We acquired six Coats of Arms: Figure <ref type="figure">4</ref> shows all of them with their identifiers. Table <ref type="table">1</ref> summarises the final results in terms of acquired points (raw data) and number of triangles for each object and device.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This study was funded by the European Union -NextGenerationEU, in the framework of the iNEST -Interconnected Nord-Est Innovation Ecosystem (iNEST ECS_00000043 -CUP H43C22000540006). The views and opinions expressed are solely those of the authors and do not necessarily reflect those of the European Union, nor can the European Union be held responsible for them.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Three-dimensional reconstruction of Roman coins from photometric image sets</title>
		<author>
			<persName><forename type="first">L</forename><surname>Macdonald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Moitinho De Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hess</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Electronic Imaging</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="011017" to="011017" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">3d reconstruction of cultural heritage sites as an educational approach. the sanctuary of delphi</title>
		<author>
			<persName><forename type="first">I</forename><surname>Liritzis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Volonakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vosinakis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">3635</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Robust joint selection of camera orientations and feature projections over multiple views</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pistellato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Albarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bergamasco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torsello</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICPR.2016.7900210</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings -International Conference on Pattern Recognition</title>
				<meeting>-International Conference on Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">0</biblScope>
			<biblScope unit="page" from="3703" to="3708" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">3d reconstruction methods for digital preservation of cultural heritage: A survey</title>
		<author>
			<persName><forename type="first">L</forename><surname>Gomes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">R P</forename><surname>Bellon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Silva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="3" to="14" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Image based 3d reconstruction in cultural heritage preservation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Cefalu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdel-Wahab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Peter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Wenzel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fritsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICINCO</title>
		<imprint>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="201" to="205" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Diachronic 3D reconstruction for lost cultural heritage</title>
		<author>
			<persName><forename type="first">G</forename><surname>Guidi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Russo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Robust cylinder estimation in point clouds from pairwise axes similarities</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pistellato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bergamasco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Albarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torsello</surname></persName>
		</author>
		<idno type="DOI">10.5220/0007401706400647</idno>
	</analytic>
	<monogr>
		<title level="m">ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="640" to="647" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Geolocating time: Digitisation and reverse engineering of a roman sundial</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pistellato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Traviglia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bergamasco</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-66096-3_11</idno>
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<imprint>
			<biblScope unit="volume">12536</biblScope>
			<biblScope unit="page" from="143" to="158" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Virtual restoration and virtual reconstruction in cultural heritage: terminology, methodologies, visual representation techniques and cognitive models</title>
		<author>
			<persName><forename type="first">E</forename><surname>Pietroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ferdani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">167</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">3D reconstruction for a cultural heritage virtual tour system</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bastanlar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Grammalidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zabulis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yilmaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yardimci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Triantafyllidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="1023" to="1036" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">On-the-go reflectance transformation imaging with ordinary smartphones</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pistellato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bergamasco</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-25056-9_17</idno>
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<imprint>
			<biblScope unit="volume">13801</biblScope>
			<biblScope unit="page" from="251" to="267" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Enhancing accessibility to cultural heritage through digital content and virtual reality: A case study of the sarmizegetusa regia unesco site</title>
		<author>
			<persName><forename type="first">R</forename><surname>Comes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Neamt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">L</forename><surname>Buna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bodi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Popescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tompa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ghinea</surname></persName>
		</author>
		<author>
			<persName><surname>Mateescu-Suciu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Ancient History And Archaeology</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Enhancing accessibility in cultural heritage environments: considerations for social computing</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kosmas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Galanakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Constantinou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Drossis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Christofi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Klironomos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zaphiris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Antona</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stephanidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Universal Access in the Information Society</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="471" to="482" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The VirtualDiver project. Making Greece&apos;s underwater cultural heritage accessible to the public</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pehlivanides</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Monastiridis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tourtas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Karyati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ioannidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bejelou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Antoniou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nomikou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">8172</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Improving accessibility to cultural heritage for people with intellectual disabilities: A tool for observing the obstacles and facilitators for the access to knowledge</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mastrogiuseppe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Span</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bortolotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Alter</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="113" to="123" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">An evaluation tool for physical accessibility of cultural heritage buildings</title>
		<author>
			<persName><forename type="first">J</forename><surname>Marín-Nicolás</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Sáez-Pérez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">15251</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Cultural heritage and disability: can ICT be the &apos;missing piece&apos; to face cultural heritage accessibility problems?</title>
		<author>
			<persName><forename type="first">A</forename><surname>Arenghi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Agostiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Smart Objects and Technologies for Social Good: Second International Conference</title>
				<meeting><address><addrLine>Venice, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-11-30">November 30 - December 1, 2016</date>
			<biblScope unit="page" from="70" to="77" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A portable system to build 3D models of cultural heritage and to allow their exploration by blind people</title>
		<author>
			<persName><forename type="first">F</forename><surname>De Felice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gramegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Renna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Attolico</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Distante</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Workshop on Haptic Audio Visual Environments and their Applications</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page">6</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Touch screen exploration of visual artwork for blind people</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ahmetovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kwon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Oh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bernareggi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mascetti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Web Conference 2021</title>
				<meeting>the Web Conference 2021</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2781" to="2791" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Tooteko: A case study of augmented reality for an accessible cultural heritage. Digitization, 3D printing and sensors for an audio-tactile experience</title>
		<author>
			<persName><forename type="first">F</forename><surname>D'agnano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Balletti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Guerra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vernier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="207" to="213" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Gesture-based interactive audio guide on tactile reliefs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Reichinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fuhrmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Maierhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Purgathofer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility</title>
				<meeting>the 18th International ACM SIGACCESS Conference on Computers and Accessibility</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="91" to="100" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Enabling access to cultural heritage for the visually impaired: an interactive 3D model of a cultural site</title>
		<author>
			<persName><forename type="first">V</forename><surname>Rossetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Furfari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Leporini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pelagatti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Quarta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="page" from="383" to="391" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Accessible visual artworks for blind and visually impaired people: comparing a multimodal approach with tactile graphics</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Cavazos</forename><surname>Quero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Iranzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bartolomé</surname></persName>
		</author>
		<author>
			<persName><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Selected methods of making three-dimensional virtual models of museum ceramic objects</title>
		<author>
			<persName><forename type="first">J</forename><surname>Montusiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Czyż</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kayumov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Computer Science</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="51" to="65" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">High definition 3D-scanning of arts objects and paintings, Optical 3-D Measurement Techniques VIII</title>
		<author>
			<persName><forename type="first">D</forename><surname>Akca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gruen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Breuckmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lahanier</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="50" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Neumüller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Reichinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Rist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kern</surname></persName>
		</author>
		<title level="m">3D printing for cultural heritage: Preservation, accessibility, research and education, 3D Research Challenges in Cultural Heritage: A Roadmap in Digital Heritage Preservation</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="119" to="134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Preparation of 3D models of cultural heritage objects to be recognised by touch by the blind - case studies</title>
		<author>
			<persName><forename type="first">J</forename><surname>Montusiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barszcz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Korga</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">11910</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">&quot;Do touch!&quot; - 3D scanning and printing technologies for the haptic representation of cultural assets: A study with blind target users</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bruns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Spiesberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Triantafyllopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">W</forename><surname>Schuller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th Workshop on analySis, Understanding and proMotion of heritAge Contents</title>
				<meeting>the 5th Workshop on analySis, Understanding and proMotion of heritAge Contents</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="21" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Evaluation of touchable 3d-printed replicas in museums</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Wilson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Warnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Attridge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Curator: The Museum Journal</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="445" to="465" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Data for 3D printing enlarged museum specimens for the visually impaired</title>
		<author>
			<persName><forename type="first">A</forename><surname>Du Plessis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Els</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Le Roux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tshibalanganda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pretorius</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Gigabyte</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Designing 3-D prints for blind and partially sighted audiences in museums: Exploring the needs of those living with sight loss</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Wilson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Griffiths</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Visitor Studies</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="120" to="140" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Multimodal 3D printed urban maps for blind people. Evaluations and scientific investigations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Telesinska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility</title>
				<meeting>the 25th International ACM SIGACCESS Conference on Computers and Accessibility</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Fixation maps: quantifying eye-movement traces</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Wooding</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2002 symposium on Eye tracking research &amp; applications</title>
				<meeting>the 2002 symposium on Eye tracking research &amp; applications</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="31" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Prediction of human eye fixations using symmetry</title>
		<author>
			<persName><forename type="first">G</forename><surname>Kootstra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Schomaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Annual Meeting of the Cognitive Science Society</title>
				<meeting>the Annual Meeting of the Cognitive Science Society</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">31</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">DeepFix: A fully convolutional neural network for predicting human eye fixations</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Kruthiventi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ayush</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Babu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="4446" to="4456" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Deep visual attention prediction</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="2368" to="2378" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">RINet: Relative importance-aware network for fixation prediction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Multimedia</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="9263" to="9277" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Salient object detection driven by fixation prediction</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borji</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Learning saliency from fixations</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">A D</forename><surname>Djilali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>McGuinness</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>O&apos;Connor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</title>
				<meeting>the IEEE/CVF Winter Conference on Applications of Computer Vision</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="383" to="393" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
