<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Saliency-driven 3D Reconstruction and Printing for Accessible Museums</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cristiana Sofica</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elisa Vargiu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mara Pistellato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucia Lionello</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianmaria Concheri</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>DAIS, Università Ca' Foscari di Venezia</institution>
          <addr-line>155 via Torino, Venezia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Università degli Studi di Padova</institution>
          ,
          <addr-line>1, Lungargine del Piovego, Padova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Three-dimensional acquisition and reproduction technologies are often exploited in the cultural heritage field for a variety of applications such as conservation, restoration, and dissemination. Another valuable use of 3D data is to make exhibitions more accessible to visitors with impairments, allowing them to fully experience and enjoy the acquired objects. In this short paper, we explore the accessibility inherently provided by 3D representations of real-world objects, with a particular focus on the quality of the models and 3D printing, as well as on presentation aspects. To this end, we propose to apply a state-of-the-art saliency-driven process, generating a fixation map that identifies the object's salient areas that need to be reproduced with a higher definition during 3D printing to improve the object's accessibility. We present a case-study involving the full process of 3D scanning and printing of the Coats of Arms in Palazzo Bo (Padova, Italy) to make them accessible to visitors with visual impairment. We employed different scanning techniques and applied the attention mechanism to the acquired data to obtain the objects' salient areas and drive the printing process accordingly. Preliminary tests involving participant feedback reveal that printing the objects with a variable detail level allows visitors to have a better understanding of the object as a whole and to appreciate the relevant details.</p>
      </abstract>
      <kwd-group>
        <kwd>Cultural heritage</kwd>
        <kwd>3D reconstruction</kwd>
        <kwd>3D printing</kwd>
        <kwd>Fixation prediction</kwd>
        <kwd>Accessibility</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent years, advancements in digital technology have revolutionized the way we document, preserve,
and share cultural artefacts. Beyond any doubt, one of these tools is 3D reconstruction, comprising
a vast set of methods for acquiring objects, from coins [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] to entire cities [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Such methods are
largely employed in the cultural heritage domain for preservation [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ], analysis [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
        ], restoration
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] or dissemination such as virtual tours [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] or interactive visualisations [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Nowadays art and
culture need to be accessible to everyone: an additional application of 3D reconstruction is to enhance
the accessibility of heritage objects. This can be done, for example, by making digital
content available to users [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ] or providing access to remote sites that are not easily reachable (e.g. underwater
locations [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]). Another crucial part of inclusiveness focuses on individuals with disabilities [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ].
This is often interpreted not only as producing the content itself or ensuring physical accessibility, but
also as actively offering the same experience to people with impairments. Also in this case, technology
offers a valid set of tools to implement these applications [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], enhancing accessibility for a wide
range of visitor categories. In this work we aim at embedding computer vision techniques directly in
the 3D reconstruction and printing processes, with the final goal of adapting state-of-the-art saliency
models to drive the printing process and enhance the experience of visually impaired people. This is
carried out by exploiting the well-known set of techniques falling under the term of saliency detection
and fixation prediction. Such models exploit fixation maps acquired by capturing the real eye movements
of people looking at the same subject. Additionally, we present a case-study involving the 3D scanning
and printing of the Coats of Arms on display at Palazzo Bo (Padova, Italy). We scanned six objects and
3D printed them following the fixation map derived from the projection of the acquired surfaces. The
main goal of the project is to create a reduced tactile “Coats of Arms wall”, so that their significance
and meaning are available to visually impaired people. A preliminary study including feedback from
blind people shows the feasibility and the potential value of the project.
      </p>
      <!-- Figure 1 (pipeline): clean up and hole filling; borders reconstruction; extrusion of borders; preparing 3D printing (scaling the model to size, laying the model flat, slicing of the model); 3D printing. -->
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Accessibility for visually impaired individuals refers to the design of services, environments, and
technologies that enable people with visual impairments to participate in society. Ensuring access to
cultural heritage for visually impaired individuals can be achieved in several ways, for instance by
providing audio descriptions [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], accessible digital content with specific applications and technologies
[19] or with tactile models. In [20] the authors propose a ring-like device to be used while exploring a 3D
surface, so that the user receives an audio description according to the touched area. With a similar
idea, the authors in [21] propose to track the user’s gesture with a depth camera to guide the tactile
exploration. In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] the authors propose to build 3D models and make them accessible to blind people via a
haptic module, and in [22] the authors developed a prototype in which blind users can explore an entire
location combining tactile and audio descriptions. Another example is [23]. 3D printing is a widely
studied technology that has been investigated for applications in the cultural heritage domain for several
purposes such as preservation, restoration or dissemination, just to name a few [24, 25, 26]. Some of
these applications include accessibility for people with visual impairments. The work presented in [27]
describes a procedure for 3D printing specifically designed for blind people. In [28] the authors analyse
scanning and printing techniques for the specific target of blind users accessing cultural content, while
[29] presents an evaluation of the user experience with 3D printed replicas. In [30] the authors propose
to increase the accessibility of a permanent exhibition by printing enlarged museum specimens to promote
interactive and inclusive experiences. Other studies can be found in [31, 32].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Attention-driven Approach Applied to 3D Models</title>
      <p>One of the challenges of accessibility is to develop a methodology to effectively create a presentation
offering the same experience to different people. In particular, for visitors with visual impairments we
have to exclude one of the most used senses for visual arts – sight. The question that follows is: what
are the visual features that make us characterise an object? And are these features also interesting for
a blind person? In this regard, we propose to address this problem by exploiting visual saliency. When
looking at an object, our gaze unintentionally lingers on some specific areas. Indeed, by tracking the eye
movements while observing some subject, we can detect which regions are visually more interesting for
our sight. Analysing the eye behaviour of many subjects observing the same scene allows us to compute
the so-called fixation map. The concept of fixation map was introduced in 2002 by D.S. Wooding [33]
and consists in defining a function that outputs the amount of visual attention for a given image location.
Subsequent works aim at predicting fixation maps based on image features such as symmetry [34] or
using data-driven approaches [35, 36, 37]. Since gaze estimation is closely related to human visual
behaviour, fixation prediction models are often associated with salient object detection [38, 39] or used to
drive other tasks, such as classification or segmentation.</p>
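      <p>To make the notion concrete, a Wooding-style fixation map can be sketched by accumulating recorded gaze positions into an image grid and smoothing them with a Gaussian kernel. The following Python fragment is a minimal sketch under assumed inputs (a list of (row, column) fixation points); it is not the prediction model of [37]:</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=5.0):
    """Accumulate (row, col) fixation points into a grid and smooth the
    result with a Gaussian, normalising to [0, 1] so that 0 means no
    attention and 1 the maximum attention."""
    h, w = shape
    acc = np.zeros(shape, dtype=float)
    for r, c in fixations:
        if 0 <= r < h and 0 <= c < w:
            acc[r, c] += 1.0          # one vote per recorded fixation
    smooth = gaussian_filter(acc, sigma=sigma)
    if smooth.max() > 0:
        smooth /= smooth.max()        # normalise to the 0-1 attention scale
    return smooth
```

      <p>Normalising by the maximum yields the 0-to-1 attention scale used for the visualisations discussed later.</p>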
      <p>In this work we propose to apply state-of-the-art fixation prediction models to the acquired 3D
objects and use the resulting fixation maps to drive the 3D printing. Starting from the acquired object
with texture, we rotate the 3D mesh according to a reference system and create a projection on a virtual
plane that is perpendicular to the original object orientation. In this way, we can use the projected
texture as input for the fixation prediction and identify the areas that would be most attractive for
an observer. Exploiting the 3D acquisition of the objects, we can project back the visually relevant
areas and adapt the printing process and some presentation aspects according to the fixation results.
The main goal of the described process is to focus on the most salient object regions so that visitors
touching the printed object can have a better understanding of the artefact in all its parts.</p>
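      <p>The projection step can be illustrated with a short Python sketch: assuming the mesh has already been rotated so that the virtual plane coincides with the XY plane, each coloured vertex is mapped to a pixel and the front-most vertex wins. Function and parameter names are illustrative, not those of our actual pipeline:</p>

```python
import numpy as np

def project_texture(vertices, colours, resolution=256):
    """Orthographic projection of coloured mesh vertices onto the XY
    plane (assumed perpendicular to the viewing direction after
    alignment).  Each pixel keeps the colour of the front-most
    (largest z) vertex that lands on it."""
    vertices = np.asarray(vertices, dtype=float)
    xy = vertices[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)
    px = ((xy - lo) / span * (resolution - 1)).astype(int)

    image = np.zeros((resolution, resolution, 3), dtype=float)
    depth = np.full((resolution, resolution), -np.inf)
    for (c, r), z, col in zip(px, vertices[:, 2], colours):
        if z > depth[r, c]:           # keep the vertex closest to the viewer
            depth[r, c] = z
            image[r, c] = col
    return image
```

      <p>The resulting RGB image is what the fixation predictor receives as input.</p>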
      <p>Figure 1 summarises the proposed pipeline for acquisition and printing. First, the 3D scan of all
the objects is performed, followed by some post-processing steps on raw data to improve the surface
quality. The core part of our pipeline involves the application of fixation prediction on the acquired
projected texture. This allows us to effectively recognise the salient areas of the object that will guide
the printing process. Finally, models are prepared and printed using two different technologies. In the
remainder we describe a case-study where we exploited the attention mechanism for two aspects: first,
we adapted the resolution of 3D printing according to the relevance; second, we focused on the most
relevant regions and printed them separately with a different technology to offer a better reading.</p>
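      <p>The way the attention map drives the printing can be sketched as a simple classification of vertices: each vertex is looked up in the fixation map through its projected pixel and assigned to the high-detail or standard group. The threshold below is an assumed value for illustration, not one calibrated in our case-study:</p>

```python
import numpy as np

def split_by_saliency(pixel_coords, fixation, threshold=0.5):
    """Classify vertices into high- and low-attention groups by looking
    up each vertex's projected (row, col) pixel in the fixation map.
    High-attention vertices would go to the finer printing technology,
    the rest to the standard one.  `threshold` is an assumed cut-off."""
    pixel_coords = np.asarray(pixel_coords)
    saliency = fixation[pixel_coords[:, 0], pixel_coords[:, 1]]
    high = saliency >= threshold
    return np.where(high)[0], np.where(~high)[0]
```

      <p>Returning index arrays keeps the original mesh untouched: the high-attention subset can then be extracted as a separate sub-mesh for printing.</p>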
    </sec>
    <sec id="sec-4">
      <title>4. Case-Study: Scanning and Printing of Coats of Arms</title>
      <p>Palazzo Bo is one of the most iconic buildings in Padova: its rooms are adorned with over three
thousand heraldic Coats of Arms depicted in frescoes and carved in stone (see Figure 2, left). These
objects represent people who held prestigious academic positions, therefore their presence offers unique
insights into the history and culture of the place. However, traditional display methods limit accessibility
for individuals with visual impairments. After an initial discussion with the museum staff, we concluded
that reproducing the Coats of Arms was the most suitable choice for the project, for two main reasons:
(i) Coats of Arms are omnipresent throughout the museum, adorning every wall and hall, so they are the
most distinctive and prevalent feature; (ii) the museum staff usually face challenges in explaining the
Coats of Arms to visually impaired visitors. We adopted two different 3D scanners for data acquisition,
the EinScan Pro HD from Shining 3D (EinScan) and the Revopoint POP 3 from Revopoint 3D Technologies
Inc. (POP3). The choice of using two similar tools derives from the intention of comparing a high-end
instrument such as the EinScan (around 14,000 Euros) with a low-cost device (the POP3 is around 700
Euros), with the idea that institutions with a limited budget could benefit from the same technique.
Both devices are handheld and are able to capture the scene by manually moving the device around the
object so that different points of view are acquired and automatically registered by the complementary
software. The EinScan offers different acquisition modes: the HD mode offers an accuracy of 0.045 mm
and acquires 3000 points per second, while the Rapid Scan mode offers a maximum accuracy of 0.1
mm. The POP3 has a precision up to 0.5 mm at a working distance of 150 − 400 mm. Figure 3 gives
an overview of the main post-processing steps, including noise reduction, point cloud alignment, hole
filling and surface reconstruction, performed for each acquired object in order to obtain a printable
mesh. The first step involves the removal of all points that are not part of the object itself, such as the
background; the following part consists in obtaining a watertight surface starting from the point
cloud, i.e. generating vertices and normals and closing the holes. Finally, since the objects are fixed to the
walls of the room, their back needs to be reconstructed as a plane so that after printing they can be put on a
horizontal surface. This is visible in the rightmost image of Figure 3, where we can notice the additional
thickness added to create a planar base. After the characterisation of salient object areas, we adapted
the 3D models, isolated the identified regions of interest, and proceeded with model preparation for
printing. We decided to employ different technologies to print different areas of the objects and offer
better readability according to the fixation maps (see Section 4.1 for details). In particular, we adopted
fused deposition modelling (FDM) and stereolithography (SLA). The FDM technology was chosen to print
the 3D model of the complete objects. FDM is a material extrusion technique in which a thermoplastic
polymer filament is heated and a movable head deposits the material layer by layer. We
employed the Creality CR-10 Smart Pro 3D printer. This printer has a print size of 300 × 300 × 400
mm and offers a printing precision of ± 0.1 mm. In Figure 2 we show an image taken while printing a
complete object with white material. The second technology we employed is SLA, used to print the
surface details requiring a higher accuracy. It is a vat polymerisation method, wherein layers of a
liquid contained in a vat are successively exposed to ultraviolet (UV) light. The liquid material reacts to
incoming light, resulting in curing only the areas exposed to UV and causing selective solidification.
We used the Formlabs Form 3 printer, characterised by a laser spot size of 85 microns, a build volume
of 145 × 145 × 185 mm and a layer thickness of 25 − 300 microns. Figure 2 shows the completed print
of a selected inscription detail: the object grows layer by layer from top to bottom, and thus a support
structure is needed in this case while the printing proceeds.</p>
      <p>4.1. Results</p>
      <p>We acquired 6 coats of arms: Figure 4 shows all of them with their identifiers. Table 1 summarises the
final results in terms of acquired points (raw data) and number of triangles for each object and device.
Usually, a higher number of points suggests a higher accuracy: looking at the EinScan acquisitions, we can
observe that objects A, B and D have ≈ 1 points, while object F has ≈ 6 points due to the HD
mode that was selected only for the last object. Regarding the POP3 acquisitions, objects C, D and E
exhibit roughly half the points with respect to the other objects, denoting a lower surface resolution.
Object D was acquired with both scanners to assess the feasibility and analyse possible limitations
of the different devices. The EinScan shows a higher resolution, while the surface acquired by the POP3 is
smoother and exhibits a less marked inscription. Despite the inherent challenges of manual acquisition,
the POP3 managed to yield satisfactory results, largely attributed to the capabilities of its software
(Revoscan 5), which played an important role in refining the acquired data. After acquisition, we used
the acquired models to generate 2-dimensional texture projections on a plane and obtain an RGB image.
Figure 5 shows the images used as input for visual saliency. We applied the fixation prediction method
proposed in [37] and used the original weights as provided by the authors. The resulting visual
attention is shown in Figure 5, applied to two of our objects. We plot fixation maps with a colour
scale representing different levels of attention, where a value of 0 means no attention and 1 indicates the
maximum attention. The third and sixth images of Figure 5 show the masking applied to the objects
according to the attention map, providing a clear interpretation of the most salient areas
on the object surfaces. For all the analysed objects, we can identify two to three
interesting areas exhibiting the highest attention, depending on the individual object features. For
all items, one part that is particularly interesting for our sight is the central part of the Coats of
Arms, depicting the symbol representing its owner. Another interesting area for objects A and B is
the small cherub on the top, while for other objects (D, E, F) the upper areas are not particularly
relevant. Finally, for some objects (e.g. item F) the bottom part with inscriptions is also attractive.
We concluded that the central parts of the objects need to be printed with higher detail and also to be
highlighted during presentation. We also focused on the printing of the cherubs and the inscriptions
on the lower parts of the objects to improve readability.</p>
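      <p>The masked views of Figure 5 correspond to a simple thresholding of the fixation map: pixels whose attention falls below a chosen level are blanked out. A minimal Python sketch, with an assumed cut-off of 0.5:</p>

```python
import numpy as np

def mask_salient(image, fixation, level=0.5):
    """Keep only the image areas whose fixation value reaches `level`
    (an assumed cut-off), producing a masked view of the salient areas."""
    mask = fixation >= level
    out = np.zeros_like(image)
    out[mask] = image[mask]           # copy pixels only inside the mask
    return out, mask
```

      <p>The same boolean mask, projected back to the mesh, delimits the regions selected for high-detail printing.</p>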
      <p>First, we printed the entire objects using the FDM printing technique, setting the layer height to 0.2 mm
and the infill density to 15%. Figure 6 (left) shows an object printed with PLA: the overall quality is good,
except for some flat areas in which the printing layers are clearly recognizable. In particular, this is
quite evident in some details (see Figure 6, center), where the resolution is altered by the printing
layers. Following these observations, SLA printing was adopted to reproduce the regions with the highest
visual attention. We used Grey v4 resin, well-suited for general-purpose prototyping, particularly for
models demanding intricate details, similar to ours. Figure 6 (right) shows a detail printed with SLA
technology, offering higher resolution and a better understanding of the underlying surface details.</p>
      <p>As a preliminary result, we involved two individuals with visual impairments who
volunteered to provide initial feedback in a survey. The main goal of the session was to determine the objects'
general usefulness, and also to assess the quality improvement offered by the direct application of visual
attention prediction. During the survey some challenges were identified, particularly regarding the
initial comprehension of the objects and the influence of printing layers on readability. In particular,
the printing layers of FDM prints are clearly perceivable, making it necessary to explain that they are
not part of the object. Also, differences between FDM and SLA printing were noted, suggesting the
need for a refinement of printing techniques to optimize tactile perception. Regarding the effectiveness
of the visual attention approach, during the survey we took note of which surface regions were
most interesting from the tactile point of view, and we observed that the regions highlighted by the
fixation prediction were the most attractive surface areas during the tactile examination. Moreover, the
participants appreciated the SLA-printed details as a helpful means to improve their understanding of the
whole object. Overall, the project was deemed useful in providing tactile representations of the Coats
of Arms, facilitating comprehension and engagement. As future work, we aim to extend the survey in
a more structured way, collecting feedback from a wider range of people, and performing an extensive
study about object readability driven by visual attention and fixation prediction mechanisms.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper we propose to merge 3D reconstruction and printing techniques with computer vision
algorithms to enhance the experience of visually impaired visitors. We present an attention-driven
method which exploits the 3D scanning of artefacts and applies the results to the printing process of cultural heritage
content. A preliminary study involving a survey highlights the effectiveness of the method, giving a
strong direction for future improvements and investigations of the proposed method.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This study was funded by the European Union - NextGenerationEU, in the framework of the iNEST
Interconnected Nord-Est Innovation Ecosystem (iNEST ECS_00000043 – CUP H43C22000540006). The
views and opinions expressed are solely those of the authors and do not necessarily reflect those of the
European Union, nor can the European Union be held responsible for them.
on Haptic Audio Visual Environments and their Applications, IEEE, 2005, pp. 6–pp.
[19] D. Ahmetovic, N. Kwon, U. Oh, C. Bernareggi, S. Mascetti, Touch screen exploration of visual
artwork for blind people, in: Proceedings of the Web Conference 2021, 2021, pp. 2781–2791.
[20] F. D’Agnano, C. Balletti, F. Guerra, P. Vernier, et al., Tooteko: A case study of augmented reality for
an accessible cultural heritage. digitization, 3d printing and sensors for an audio-tactile experience,
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information
Sciences 40 (2015) 207–213.
[21] A. Reichinger, A. Fuhrmann, S. Maierhofer, W. Purgathofer, Gesture-based interactive audio guide
on tactile reliefs, in: Proceedings of the 18th International ACM SIGACCESS Conference on
Computers and Accessibility, 2016, pp. 91–100.
[22] V. Rossetti, F. Furfari, B. Leporini, S. Pelagatti, A. Quarta, Enabling access to cultural heritage for
the visually impaired: an interactive 3d model of a cultural site, Procedia computer science 130
(2018) 383–391.
[23] L. Cavazos Quero, J. Iranzo Bartolomé, J. Cho, Accessible visual artworks for blind and visually
impaired people: comparing a multimodal approach with tactile graphics, Electronics 10 (2021).
[24] J. Montusiewicz, Z. Czyż, R. Kayumov, Selected methods of making three-dimensional virtual
models of museum ceramic objects, Applied Computer Science 11 (2015) 51–65.
[25] D. Akca, A. Gruen, B. Breuckmann, C. Lahanier, High definition 3d-scanning of arts objects and
paintings, Optical 3-D measurement techniques VIII 2 (2007) 50–58.
[26] M. Neumüller, A. Reichinger, F. Rist, C. Kern, 3d printing for cultural heritage: Preservation,
accessibility, research and education, 3D research challenges in cultural heritage: a roadmap in
digital heritage preservation (2014) 119–134.
[27] J. Montusiewicz, M. Barszcz, S. Korga, Preparation of 3d models of cultural heritage objects to be
recognised by touch by the blind—case studies, Applied Sciences 12 (2022) 11910.
[28] A. Bruns, A. A. Spiesberger, A. Triantafyllopoulos, P. Müller, B. W. Schuller, "Do touch!" - 3d
scanning and printing technologies for the haptic representation of cultural assets: A study with
blind target users, in: Proceedings of the 5th Workshop on analySis, Understanding and proMotion
of heritAge Contents, 2023, pp. 21–28.
[29] P. F. Wilson, J. Stott, J. M. Warnett, A. Attridge, M. P. Smith, M. A. Williams, Evaluation of
touchable 3d-printed replicas in museums, Curator: The Museum Journal 60 (2017) 445–465.
[30] A. du Plessis, J. Els, S. le Roux, M. Tshibalanganda, T. Pretorius, Data for 3d printing enlarged
museum specimens for the visually impaired, Gigabyte 2020 (2020).
[31] P. F. Wilson, S. Griffiths, E. Williams, M. P. Smith, M. A. Williams, Designing 3-d prints for blind
and partially sighted audiences in museums: exploring the needs of those living with sight loss,
Visitor Studies 23 (2020) 120–140.
[32] M. Telesinska, Multimodal 3d printed urban maps for blind people. evaluations and scientific
investigations, in: Proceedings of the 25th International ACM SIGACCESS Conference on Computers
and Accessibility, 2023, pp. 1–7.
[33] D. S. Wooding, Fixation maps: quantifying eye-movement traces, in: Proceedings of the 2002
symposium on Eye tracking research &amp; applications, 2002, pp. 31–36.
[34] G. Kootstra, L. R. Schomaker, Prediction of human eye fixations using symmetry, in: Proceedings
of the Annual Meeting of the Cognitive Science Society, volume 31, 2009.
[35] S. S. Kruthiventi, K. Ayush, R. V. Babu, Deepfix: A fully convolutional neural network for predicting
human eye fixations, IEEE Transactions on Image Processing 26 (2017) 4446–4456.
[36] W. Wang, J. Shen, Deep visual attention prediction, IEEE Transactions on Image Processing 27
(2017) 2368–2378.
[37] Y. Song, Z. Liu, G. Li, D. Zeng, T. Zhang, L. Xu, J. Wang, Rinet: Relative importance-aware network
for fixation prediction, IEEE Transactions on Multimedia 25 (2023) 9263–9277.
[38] W. Wang, J. Shen, X. Dong, A. Borji, Salient object detection driven by fixation prediction, in:</p>
      <p>Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
[39] Y. A. D. Djilali, K. McGuinness, N. O’Connor, Learning saliency from fixations, in: Proceedings of
the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 383–393.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>MacDonald</surname>
          </string-name>
          , V. Moitinho de Almeida, M. Hess,
          <article-title>Three-dimensional reconstruction of roman coins from photometric image sets</article-title>
          ,
          <source>Journal of Electronic Imaging</source>
          <volume>26</volume>
          (
          <year>2017</year>
          )
          <fpage>011017</fpage>
          -
          <lpage>011017</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Liritzis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Volonakis</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Vosinakis,</surname>
          </string-name>
          <article-title>3d reconstruction of cultural heritage sites as an educational approach: the sanctuary of delphi</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <fpage>3635</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pistellato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Albarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bergamasco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torsello</surname>
          </string-name>
          ,
          <article-title>Robust joint selection of camera orientations and feature projections over multiple views</article-title>
          ,
          <source>in: Proceedings - International Conference on Pattern Recognition</source>
          , volume
          <volume>0</volume>
          ,
          <year>2016</year>
          , p.
          <fpage>3703</fpage>
          -
          <lpage>3708</lpage>
          . doi:10.1109/ICPR.2016.7900210.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. R. P.</given-names>
            <surname>Bellon</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. Silva,</surname>
          </string-name>
          <article-title>3d reconstruction methods for digital preservation of cultural heritage: A survey</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>50</volume>
          (
          <year>2014</year>
          )
          <fpage>3</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cefalu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Abdel-Wahab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Peter</surname>
          </string-name>
          , K. Wenzel, D. Fritsch,
          <article-title>Image based 3d reconstruction in cultural heritage preservation</article-title>
          .,
          <source>in: ICINCO (1)</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Guidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <article-title>Diachronic 3d reconstruction for lost cultural heritage</article-title>
          ,
          <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>
          <volume>38</volume>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pistellato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bergamasco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Albarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torsello</surname>
          </string-name>
          ,
          <article-title>Robust cylinder estimation in point clouds from pairwise axes similarities</article-title>
          ,
          <source>in: ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>640</fpage>
          -
          <lpage>647</lpage>
          . doi:10.5220/0007401706400647.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pistellato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Traviglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bergamasco</surname>
          </string-name>
          ,
          <article-title>Geolocating time: Digitisation and reverse engineering of a roman sundial</article-title>
          ,
          <source>Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 12536 LNCS</source>
          (
          <year>2020</year>
          )
          <fpage>143</fpage>
          -
          <lpage>158</lpage>
          . doi:10.1007/978-3-030-66096-3_11.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Pietroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferdani</surname>
          </string-name>
          ,
          <article-title>Virtual restoration and virtual reconstruction in cultural heritage: terminology, methodologies, visual representation techniques and cognitive models</article-title>
          ,
          <source>Information</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>167</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bastanlar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Grammalidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zabulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yardimci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Triantafyllidis</surname>
          </string-name>
          ,
          <article-title>3d reconstruction for a cultural heritage virtual tour system</article-title>
          ,
          <source>Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci</source>
          <volume>37</volume>
          (
          <year>2008</year>
          )
          <fpage>1023</fpage>
          -
          <lpage>1036</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pistellato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bergamasco</surname>
          </string-name>
          ,
          <article-title>On-the-go reflectance transformation imaging with ordinary smartphones</article-title>
          ,
          <source>Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 13801 LNCS</source>
          (
          <year>2023</year>
          )
          <fpage>251</fpage>
          -
          <lpage>267</lpage>
          . doi:10.1007/978-3-031-25056-9_17.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Comes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Neamțu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. L.</given-names>
            <surname>Buna</surname>
          </string-name>
          ,
          <string-name>
            <surname>Bodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Popescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tompa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ghinea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mateescu-Suciu</surname>
          </string-name>
          ,
          <article-title>Enhancing accessibility to cultural heritage through digital content and virtual reality: A case study of the sarmizegetusa regia unesco site</article-title>
          ,
          <source>Journal of Ancient History And Archaeology</source>
          <volume>7</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kosmas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Galanakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Constantinou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Drossis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Christofi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Klironomos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zaphiris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Antona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stephanidis</surname>
          </string-name>
          ,
          <article-title>Enhancing accessibility in cultural heritage environments: considerations for social computing</article-title>
          ,
          <source>Universal Access in the Information Society</source>
          <volume>19</volume>
          (
          <year>2020</year>
          )
          <fpage>471</fpage>
          -
          <lpage>482</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Pehlivanides</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Monastiridis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tourtas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karyati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ioannidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bejelou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Antoniou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nomikou</surname>
          </string-name>
          ,
          <article-title>The virtualdiver project. making greece's underwater cultural heritage accessible to the public</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>10</volume>
          (
          <year>2020</year>
          )
          <fpage>8172</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mastrogiuseppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Span</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bortolotti</surname>
          </string-name>
          ,
          <article-title>Improving accessibility to cultural heritage for people with intellectual disabilities: A tool for observing the obstacles and facilitators for the access to knowledge</article-title>
          ,
          <source>Alter</source>
          <volume>15</volume>
          (
          <year>2021</year>
          )
          <fpage>113</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Marín-Nicolás</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Sáez-Pérez</surname>
          </string-name>
          ,
          <article-title>An evaluation tool for physical accessibility of cultural heritage buildings</article-title>
          ,
          <source>Sustainability</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>15251</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Arenghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agostiano</surname>
          </string-name>
          ,
          <article-title>Cultural heritage and disability: can ict be the 'missing piece' to face cultural heritage accessibility problems?</article-title>
          ,
          <source>in: Smart Objects and Technologies for Social Good: Second International Conference, Venice, Italy, November 30-December 1, 2016</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>F.</given-names>
            <surname>De Felice</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gramegna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Renna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Attolico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Distante</surname>
          </string-name>
          ,
          <article-title>A portable system to build 3d models of cultural heritage and to allow their exploration by blind people</article-title>
          , in: IEEE International Workshop
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>