=Paper=
{{Paper
|id=Vol-3865/05_paper
|storemode=property
|title=Saliency-driven 3D Reconstruction and Printing for Accessible Museums (short paper)
|pdfUrl=https://ceur-ws.org/Vol-3865/05_paper.pdf
|volume=Vol-3865
|authors=Cristiana Sofica,Elisa Vargiu,Mara Pistellato,Lucia Lionello,Gianmaria Concheri
|dblpUrl=https://dblp.org/rec/conf/aiia/SoficaVLC24
}}
==Saliency-driven 3D Reconstruction and Printing for Accessible Museums (short paper)==
Saliency-driven 3D Reconstruction and Printing for Accessible Museums
Cristiana Sofica¹, Elisa Vargiu¹, Mara Pistellato²·*, Lucia Lionello¹ and Gianmaria Concheri¹
¹ Università degli Studi di Padova, 1 Lungargine del Piovego, Padova, Italy
² DAIS, Università Ca’ Foscari di Venezia, 155 via Torino, Venezia, Italy
Abstract
Three-dimensional acquisition and reproduction technologies are often exploited in the cultural heritage field for a variety of applications such as conservation, restoration, and dissemination. Another valuable use of 3D data is to make exhibitions more accessible to visitors with impairments, allowing them to fully experience and enjoy the acquired objects. In this short paper, we explore the accessibility inherently provided by 3D representations of real-world objects, with a particular focus on the quality of the models and of the 3D printing, as well as on presentation aspects. To this end, we propose to apply a state-of-the-art saliency-driven process, generating a fixation map that identifies the object’s salient areas, which are then reproduced at a higher definition during 3D printing to improve the object’s accessibility. We present a case-study involving the full process of 3D scanning and printing the Coats of Arms in Palazzo Bo (Padova, Italy) to make them accessible to visitors with visual impairment. We employed different scanning techniques and applied the attention mechanism to the acquired data to obtain the objects’ salient areas and drive the printing process accordingly. Preliminary tests involving participant feedback reveal that printing the objects with a variable level of detail allows visitors to better understand the object as a whole and to appreciate the relevant details.
Keywords
Cultural heritage, 3D reconstruction, 3D printing, Fixation prediction, Accessibility
1. Introduction
In recent years, advancements in digital technology have revolutionized the way we document, preserve, and share cultural artefacts. Beyond any doubt, one of these tools is 3D reconstruction, comprising a vast set of methods for acquiring objects, from coins [1] to entire cities [2, 3]. Such methods are largely employed in the cultural heritage domain for preservation [4, 5], analysis [6, 7, 8], restoration [9] or dissemination, such as virtual tours [10] or interactive visualisations [11]. Nowadays art and culture need to be accessible to everyone, and an additional application of 3D reconstruction is therefore to enhance the accessibility of heritage objects. This can be done, for example, by making digital content available to users [12, 13] or by providing access to remote sites that are not easily reachable (e.g. underwater locations [14]). Another crucial part of inclusiveness focuses on individuals with disabilities [15, 16]. This means not only producing the content itself or ensuring physical accessibility, but also actively offering the same experience to people with impairments. Here too, technology offers a valid set of tools to implement these applications [17], enhancing accessibility for a wide range of visitor categories. In this work we aim at embedding computer vision techniques directly in the 3D reconstruction and printing processes, with the final goal of adapting state-of-the-art saliency models to drive the printing process and enhance the experience of visually impaired people. This is carried out by exploiting the well-known set of techniques falling under the terms of saliency detection and fixation prediction. Such models exploit fixation maps acquired by capturing the real eye movements of
3rd Workshop on Artificial Intelligence for Cultural Heritage (AI4CH 2024, https://ai4ch.di.unito.it/), co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024), 26-28 November 2024, Bolzano, Italy
* Corresponding author.
cristianasimona.sofica@unipd.it (C. Sofica); elisa.vargiu@studenti.unipd.it (E. Vargiu); mara.pistellato@unive.it (M. Pistellato); lucia.lionello@unipd.it (L. Lionello); gianmaria.concheri@unipd.it (G. Concheri)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
Figure 1: Our pipeline for acquisition and attention-driven printing: data acquisition → post-processing (clean-up and hole filling, border reconstruction, extrusion of borders) → fixation prediction → preparation for 3D printing (scaling the model to size, laying the model flat, slicing the model) → 3D printing.
people looking at the same subject. Additionally, we present a case-study involving the 3D scanning and printing of the Coats of Arms on display at Palazzo Bo (Padova, Italy). We scanned six objects and 3D printed them following the fixation map derived from the projection of the acquired surfaces. The main goal of the project is to create a reduced tactile “Coats of Arms wall”, so that their significance and meaning become accessible to visually impaired people. A preliminary study including feedback from blind people shows the feasibility and the potential value of the project.
2. Related Works
Accessibility for visually impaired individuals refers to the design of services, environments, and technologies that enable people with visual impairments to participate in society. Ensuring access to cultural heritage for visually impaired individuals can be implemented in several ways, for instance by providing audio descriptions [18], accessible digital content with specific applications and technologies [19], or tactile models. In [20] the authors propose a ring-like device to be worn while exploring a 3D surface, so that the user receives an audio description according to the touched area. With a similar idea, the authors in [21] propose to track the user’s gesture with a depth camera to guide the tactile exploration. In [18] the authors propose to build 3D models and make them accessible to blind people via a haptic module, and in [22] the authors developed a prototype in which blind users can explore an entire location by combining tactile and audio descriptions. Another example is [23]. 3D printing is a widely studied technology that has been investigated for applications in the cultural heritage domain for several purposes such as preservation, restoration or dissemination, just to name a few [24, 25, 26]. Some of these applications include accessibility for people with visual impairments. The work presented in [27] describes a procedure for 3D printing specifically designed for blind people. In [28] the authors analyse scanning and printing techniques for the specific target of blind users accessing cultural content, while [29] presents an evaluation of the user experience with 3D printed replicas. In [30] the authors propose to increase the accessibility of a permanent exhibition by printing enlarged museum specimens to promote interactive and inclusive experiences. Other studies can be found in [31, 32].
3. Attention-driven Printing Applied to 3D Models
One of the challenges of accessibility is to develop a methodology that effectively creates a presentation offering the same experience to different people. In particular, for visitors with visual impairments we have to exclude one of the most used senses for visual arts – sight. The questions that follow are: which visual features make us characterise an object, and are these features also interesting for a blind person? In this regard, we propose to address this problem by exploiting visual saliency. When looking at an object, our gaze unintentionally lingers on some specific areas. Indeed, by tracking eye movements while a subject is observed, we can detect which regions are visually more interesting for our sight. Analysing the eye behaviour of many subjects observing the same scene allows us to compute the so-called fixation map. The concept of a fixation map was introduced in 2002 by D. S. Wooding [33] and consists in defining a function that outputs the amount of visual attention at a given image location. Subsequent works aim at predicting fixation maps based on image features such as symmetry [34] or
Figure 2: From left to right: a wall detail of the Great Hall in Palazzo Bo with hanging Coats of Arms, the 3D acquisition process, and printing with different technologies.
using data-driven approaches [35, 36, 37]. Since gaze estimation is closely related to human vision behaviour, fixation prediction models are often associated with salient object detection [38, 39] or used to drive other tasks, such as classification or segmentation.
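The fixation-map idea can be illustrated with a short sketch: each recorded fixation deposits a Gaussian blob, and the accumulated map is normalised to [0, 1]. This is a generic, Wooding-style illustration; the image size, fixation coordinates, and Gaussian width below are invented, not taken from the paper.

```python
import numpy as np

def fixation_map(fixations, shape, sigma=15.0):
    """Build a fixation map: each fixation (row, col) contributes a
    Gaussian blob; the accumulated map is normalised to [0, 1]."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    fmap = np.zeros(shape, dtype=float)
    for fy, fx in fixations:
        fmap += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    if fmap.max() > 0:
        fmap /= fmap.max()  # normalise so the peak attention is 1
    return fmap

# Two observers fixating near the same spot reinforce each other,
# so that area ends up more salient than an isolated fixation.
m = fixation_map([(30, 40), (32, 42), (70, 10)], shape=(100, 100))
```

Averaging such maps over many observers is what turns individual gaze traces into a stable saliency signal.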
In this work we propose to apply state-of-the-art fixation prediction models to the acquired 3D objects and to use the resulting fixation maps to drive the 3D printing. Starting from the acquired object with texture, we rotate the 3D mesh according to a reference system and create a projection on a virtual plane that is perpendicular to the original object orientation. In this way, we can use the projected texture as input for the fixation prediction and identify the areas that would be most attractive to an observer. Exploiting the 3D acquisition of the objects, we can project the visually relevant areas back and adapt the printing process and some presentation aspects according to the fixation results. The main goal of the described process is to focus on the most salient object regions, so that visitors touching the printed object can gain a better understanding of the artefact in all its parts.
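As a rough illustration of the projection step, the sketch below orthographically projects coloured vertices onto a plane and rasterises them into an RGB image, keeping the point nearest the viewer per pixel. It is a minimal stand-in, not the authors’ actual rendering code, and the vertex data in the usage example are invented.

```python
import numpy as np

def project_texture(vertices, colors, res=64):
    """Orthographically project coloured 3D vertices onto the XY plane
    (assumed perpendicular to the viewing direction) and rasterise them
    into an RGB image, keeping the nearest point (largest z) per pixel."""
    v = np.asarray(vertices, float)
    xy = v[:, :2]
    lo, hi = xy.min(0), xy.max(0)
    # Map XY coordinates to integer pixel positions in a res x res grid.
    px = ((xy - lo) / np.maximum(hi - lo, 1e-9) * (res - 1)).astype(int)
    img = np.zeros((res, res, 3))
    depth = np.full((res, res), -np.inf)
    for (x, y), z, c in zip(px, v[:, 2], colors):
        if z > depth[y, x]:      # simple depth test: keep the front point
            depth[y, x] = z
            img[y, x] = c
    return img

# Hypothetical toy data: three coloured vertices, two sharing a pixel.
vertices = [[0, 0, 0.0], [1, 1, 1.0], [0, 0, 0.5]]
colors = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
img = project_texture(vertices, colors)
```

The resulting image can then be fed to any 2D fixation prediction network, and the pixel-to-vertex mapping `px` allows projecting the predicted saliency back onto the mesh.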
Figure 1 summarises the proposed pipeline for acquisition and printing. First, the 3D scan of all the objects is performed, followed by some post-processing steps on the raw data to improve the surface quality. The core part of our pipeline involves the application of fixation prediction to the projected texture of the acquired object. This allows us to effectively recognise the salient areas of the object that will guide the printing process. Finally, the models are prepared and printed using two different technologies. In the remainder we describe a case-study where we exploited the attention mechanism for two aspects: first, we adapted the resolution of the 3D printing according to the relevance of each area; second, we focused on the most relevant regions and printed them separately with a different technology to offer a better reading.
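The first geometric step of this pipeline, bringing the mesh into a reference frame before projecting its texture, can be approximated by aligning the point cloud to its principal axes. The paper does not specify its alignment method, so this is only one plausible sketch: for a bas-relief, the smallest-variance axis (the depth) ends up on z, yielding a projection plane perpendicular to the object.

```python
import numpy as np

def align_to_principal_axes(vertices):
    """Rotate a point cloud so its principal axes map onto x, y, z
    (descending variance), i.e. a PCA-based reference-frame alignment."""
    v = np.asarray(vertices, float)
    centred = v - v.mean(axis=0)
    w, vecs = np.linalg.eigh(np.cov(centred.T))
    order = np.argsort(w)[::-1]        # columns sorted by descending variance
    return centred @ vecs[:, order]

# A flat grid tilted out of the XY plane by an arbitrary rotation about x:
xs, ys = np.meshgrid(np.arange(10.0), np.arange(5.0))
flat = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(50)])
c, s = np.cos(0.6), np.sin(0.6)
tilted = flat @ np.array([[1, 0, 0], [0, c, -s], [0, s, c]]).T
aligned = align_to_principal_axes(tilted)
```

After alignment, the depth variation of the relief sits entirely on the z axis, so an orthographic projection along z is well defined.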
4. Case-Study: Scanning and Printing of Coats of Arms
Palazzo Bo is one of the most iconic buildings in Padova: its rooms are adorned with over three thousand heraldic Coats of Arms depicted in frescoes and carved in stone (see Figure 2, left). These objects represent people who held prestigious academic positions, therefore their presence offers unique insights into the history and culture of the place. However, traditional display methods limit accessibility for individuals with visual impairments. After an initial discussion with the museum staff, we concluded that reproducing the Coats of Arms was the most suitable choice for the project, for two main reasons: (i) Coats of Arms are omnipresent throughout the museum, adorning every wall and hall, so they are its most distinctive and prevalent feature; (ii) the museum staff usually face challenges in explaining the Coats of Arms to visually impaired visitors. We adopted two different 3D scanners for data acquisition, the EinScan Pro HD from Shining 3D (EinScan) and the Revopoint POP 3 from Revopoint 3D Technologies Inc. (POP3). The choice of using two similar tools derives from the intention of comparing a high-end instrument such as the EinScan (around 14,000 Euros) with a low-cost device (the POP3 costs around 700 Euros), with the idea that institutions with a limited budget could possibly benefit from the same technique. Both devices are handheld and capture the scene as the device is manually moved around the object, so that different points of view are acquired and automatically registered by the companion software. The EinScan offers different acquisition modes: the HD mode offers an accuracy of 0.045 mm,
Figure 3: Post-processing steps performed after raw data acquisition: raw data cleaning, hole filling, and back reconstruction.
Figure 4: Acquired Coats of Arms with identifiers A–F.
and acquires 3000 points per second, while the Rapid Scan mode offers a maximum accuracy of 0.1 mm. The POP3 has a precision up to 0.5 mm at a working distance of 150–400 mm. Figure 3 gives an overview of the main post-processing steps, including noise reduction, point cloud alignment, hole filling and surface reconstruction, performed for each acquired object in order to obtain a printable mesh. The first step involves the removal of all points that are not part of the object itself, such as the background; the following part consists in obtaining a watertight surface starting from the point cloud, i.e. generating vertices and normals and closing the holes. Finally, since the objects are fixed to the walls of the room, their backs need to be reconstructed as a plane so that, after printing, the object can be placed on a horizontal surface. This is visible in the rightmost image of Figure 3, where we can notice the additional thickness added to create a planar base.
After the characterisation of the salient object areas, we adapted the 3D models, isolated the identified regions of interest, and proceeded with model preparation for printing. We decided to employ different technologies to print different areas of the objects and offer better readability according to the fixation maps (see Section 4.1 for details). In particular, we adopted fused deposition modelling (FDM) and stereolithography (SLA). The FDM technology was chosen to print the 3D model of the complete objects. FDM is a material extrusion technique in which a thermoplastic polymer filament is heated and a movable head deposits the material layer by layer. We employed the Creality CR-10 Smart Pro 3D printer, which has a print size of 300 × 300 × 400 mm and offers a printing precision of ±0.1 mm. In Figure 2 we show an image taken while printing a complete object with white material. The second technology we employed is SLA, used to print the surface details requiring a higher accuracy. It is a vat polymerisation method, wherein layers of a liquid contained in a vat are successively exposed to ultraviolet (UV) light. The liquid material reacts to the incoming light, curing only the areas exposed to UV and causing selective solidification. We used the Formlabs Form 3 printer, characterised by a laser spot size of 85 microns, a build volume of 145 × 145 × 185 mm and a layer thickness of 25–300 microns. Figure 2 shows the completed print of a selected inscription detail: the object grows layer by layer from top to bottom, and thus a support structure is needed in this case while the printing proceeds.
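The back-reconstruction step described above can be sketched as follows: shift the scan so its lowest relief point sits at z = 0 and append a flat backing plane below it. This is a simplified stand-in for the border extrusion performed in the paper (only vertices are produced here, with no triangulated faces), and the base thickness is an arbitrary assumption.

```python
import numpy as np

def add_planar_base(vertices, base_thickness=2.0):
    """Translate a relief scan so its lowest point sits at z=0 and append
    the four corners of a flat backing plane at z=-base_thickness, so a
    printed copy can rest on a horizontal surface."""
    v = np.asarray(vertices, float)
    v = v - [0, 0, v[:, 2].min()]               # drop the relief onto z=0
    x0, y0 = v[:, 0].min(), v[:, 1].min()
    x1, y1 = v[:, 0].max(), v[:, 1].max()
    base = np.array([[x0, y0, -base_thickness], [x1, y0, -base_thickness],
                     [x1, y1, -base_thickness], [x0, y1, -base_thickness]])
    return np.vstack([v, base])

# Hypothetical two-point relief spanning z in [5, 7]:
relief = [[0, 0, 5.0], [2, 3, 7.0]]
solid = add_planar_base(relief, base_thickness=2.0)
```

In practice the same operation is done on a full mesh (e.g. by extruding its boundary loop down to the base plane) so that the result is watertight and sliceable.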
4.1. Results
We acquired six Coats of Arms: Figure 4 shows all of them with their identifiers. Table 1 summarises the final results in terms of acquired points (raw data) and number of triangles for each object and device.
Table 1
Acquisition results for all the acquired objects with two different 3D scanners.
Object | EinScan points | EinScan triangles | POP3 points | POP3 triangles
A      | 955,344        | 1,489,891         | -           | -
B      | 946,244        | 1,556,750         | -           | -
C      | -              | -                 | 647,394     | 1,959,020
D      | 941,649        | 1,463,119         | 558,870     | 1,080,209
E      | -              | -                 | 614,255     | 1,896,255
F      | 6,284,169      | 8,666,750         | -           | -
Figure 5: Fixation prediction applied to the acquired Coats of Arms. The first row shows the input data coming from the 3D model, and the second row shows the masked object obtained when the saliency map (in the third row) is applied.
Usually, a higher number of points suggests a higher accuracy: looking at the EinScan acquisitions, we can observe that objects A, B and D have ≈1M points, while object F has ≈6M points due to the HD mode, which was selected only for the last object. Regarding the POP3 acquisitions, objects C, D and E exhibit roughly half the points with respect to the other objects, denoting a lower surface resolution. Object D was acquired with both scanners to assess the feasibility and analyse possible limitations of the different devices. The EinScan shows a higher resolution, while the surface acquired by the POP3 is smoother and exhibits a less marked inscription. Despite the inherent challenges of manual acquisition, the POP3 managed to yield satisfactory results, largely attributable to the capabilities of its software (Revoscan 5), which played an important role in refining the acquired data. After acquisition, we used the acquired models to generate two-dimensional texture projections on a plane and obtain an RGB image. Figure 5 shows the images used as input for visual saliency. We applied the fixation prediction method proposed in [37] and used the original weights as provided by the authors. The resulting visual attention is shown in Figure 5, applied to two of our objects. We plot the fixation maps with a colour scale representing different levels of attention, where a value of 0 means no attention and 1 indicates maximum attention. The third and sixth images of Figure 5 show the masking applied to the objects according to the attention map, giving a clear interpretation of the most salient areas on the object surfaces. We can notice that, for all the analysed objects, we can identify two to three interesting areas exhibiting the highest attention, depending on the individual object features. For all items, one part that is particularly interesting to our sight is the central part of the Coat of Arms, depicting the symbol representing its owner. Another interesting area for objects A and B is the small cherub on the top, while for the other objects (D, E, F) the upper areas are not particularly relevant. Finally, for some objects (e.g. item F) the bottom part with inscriptions is also attractive. We concluded that the central parts of the objects need to be printed with higher detail and also highlighted during presentation. We also focused on printing the cherubs and the inscriptions on the lower parts of the objects to improve readability.
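Selecting the areas to print separately can be sketched as a simple thresholding of the normalised fixation map followed by connected-component grouping. The threshold value below is an arbitrary assumption; the paper does not state how the regions were delimited.

```python
import numpy as np

def salient_regions(fmap, thresh=0.6):
    """Binarise a [0,1] fixation map and return the bounding boxes
    (y0, x0, y1, x1) of the 4-connected components above threshold,
    using a pure-numpy flood fill to avoid a SciPy dependency."""
    mask = fmap >= thresh
    seen = np.zeros_like(mask, bool)
    boxes = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, ys, xs = [(sy, sx)], [], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    ys.append(y); xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Hypothetical 20x20 map with two salient blobs:
fmap = np.zeros((20, 20))
fmap[2:5, 2:5] = 1.0      # e.g. the central emblem
fmap[10:14, 10:12] = 0.8  # e.g. an inscription
boxes = salient_regions(fmap, thresh=0.6)
```

Each returned box can then be mapped back through the texture projection to crop the corresponding patch of the mesh for separate high-resolution printing.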
First, we printed the entire objects using the FDM printing technique, setting the layer height to 0.2 mm and an infill density of 15%. Figure 6 (left) shows an object printed with PLA: the overall quality is good, except for some flat areas in which the printing layers are clearly recognisable. In particular, this is quite evident in some details (see Figure 6, centre), where the resolution is altered by the printing layers. Following these observations, SLA printing was adopted to reproduce the regions with the highest visual attention. We used Grey v4 resin, well-suited for general-purpose prototyping, particularly for models demanding intricate details, such as ours. Figure 6 (right) shows a detail printed with SLA technology, offering a higher resolution and a better rendering of the underlying surface details.
Figure 6: Coats of Arms printed with white PLA, with some inscription details where printing layers are relevant.
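The practical difference between the two layer heights is easy to quantify. For a hypothetical detail 50 mm tall (the actual object sizes are not reported here), the sketch below counts the layers each process deposits, working in integer micrometres to avoid floating-point rounding.

```python
import math

def layer_count(height_um, layer_um):
    """Number of layers needed to print an object of the given height,
    with both values expressed in micrometres."""
    return math.ceil(height_um / layer_um)

# Hypothetical 50 mm tall detail:
fdm = layer_count(50_000, 200)  # FDM at the 0.2 mm layer height used above
sla = layer_count(50_000, 25)   # SLA at the finest 25 micron layer setting
```

At its finest setting the SLA printer deposits eight times as many layers as the FDM setup over the same height, which is why the stepping that bothers fingertips on FDM surfaces all but disappears on the resin prints.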
As a preliminary evaluation, we ran a survey with two individuals with visual impairments who volunteered to provide initial feedback. The main goal of the session was to determine the objects’ general usefulness, and also to assess the quality improvement offered by the direct application of visual attention prediction. During the survey some challenges were identified, particularly regarding the initial comprehension of the objects and the influence of the printing layers on readability. In particular, the perception of the printing layers for FDM is significant, and brings the need to explain that they are not part of the object. Differences between FDM and SLA printing were also noted, suggesting the need to refine the printing techniques to optimise tactile perception. Regarding the effectiveness of the visual attention approach, during the survey we took note of which surface regions were most interesting from the tactile point of view, and we observed that the regions highlighted by the fixation prediction were the most attractive surface areas during the tactile examination. Moreover, the participants appreciated the SLA-printed details as a helpful means to improve their understanding of the whole object. Overall, the project was deemed useful in providing tactile representations of the Coats of Arms, facilitating comprehension and engagement. As future work, we aim to extend the survey in a more structured way, collecting feedback from a wider range of people, and performing an extensive study of object readability driven by visual attention and fixation prediction mechanisms.
5. Conclusion
In this paper we propose to merge 3D reconstruction and printing techniques with computer vision algorithms to enhance the experience of visually impaired visitors. We present an attention-driven method which exploits the 3D scanning of artefacts and applies it to the printing process of cultural heritage content. A preliminary study involving a survey highlights the effectiveness of the method, giving a strong direction for future improvements and investigations of the proposed approach.
Acknowledgments
This study was funded by the European Union - NextGenerationEU, in the framework of the iNEST -
Interconnected Nord-Est Innovation Ecosystem (iNEST ECS_00000043 – CUP H43C22000540006). The
views and opinions expressed are solely those of the authors and do not necessarily reflect those of the
European Union, nor can the European Union be held responsible for them.
References
[1] L. MacDonald, V. Moitinho de Almeida, M. Hess, Three-dimensional reconstruction of roman
coins from photometric image sets, Journal of Electronic Imaging 26 (2017) 011017–011017.
[2] I. Liritzis, P. Volonakis, S. Vosinakis, 3d reconstruction of cultural heritage sites as an educational
approach. the sanctuary of delphi, Applied Sciences 11 (2021) 3635.
[3] M. Pistellato, A. Albarelli, F. Bergamasco, A. Torsello, Robust joint selection of camera orientations
and feature projections over multiple views, in: Proceedings - International Conference on Pattern
Recognition, 2016, pp. 3703–3708. doi:10.1109/ICPR.2016.7900210.
[4] L. Gomes, O. R. P. Bellon, L. Silva, 3d reconstruction methods for digital preservation of cultural
heritage: A survey, Pattern Recognition Letters 50 (2014) 3–14.
[5] A. Cefalu, M. Abdel-Wahab, M. Peter, K. Wenzel, D. Fritsch, Image based 3d reconstruction in
cultural heritage preservation., in: ICINCO (1), 2013, pp. 201–205.
[6] G. Guidi, M. Russo, Diachronic 3d reconstruction for lost cultural heritage, The International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 38 (2012).
[7] M. Pistellato, F. Bergamasco, A. Albarelli, A. Torsello, Robust cylinder estimation in point
clouds from pairwise axes similarities, in: ICPRAM 2019 - Proceedings of the 8th International
Conference on Pattern Recognition Applications and Methods, 2019, pp. 640–647.
doi:10.5220/0007401706400647.
[8] M. Pistellato, A. Traviglia, F. Bergamasco, Geolocating time: Digitisation and reverse engineering
of a roman sundial, Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics) 12536 LNCS (2020) 143–158.
doi:10.1007/978-3-030-66096-3_11.
[9] E. Pietroni, D. Ferdani, Virtual restoration and virtual reconstruction in cultural heritage: termi-
nology, methodologies, visual representation techniques and cognitive models, Information 12
(2021) 167.
[10] Y. Bastanlar, N. Grammalidis, X. Zabulis, E. Yilmaz, Y. Yardimci, G. Triantafyllidis, 3d reconstruction
for a cultural heritage virtual tour system, Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 37
(2008) 1023–1036.
[11] M. Pistellato, F. Bergamasco, On-the-go reflectance transformation imaging with ordinary smartphones,
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial
Intelligence and Lecture Notes in Bioinformatics) 13801 LNCS (2023) 251–267.
doi:10.1007/978-3-031-25056-9_17.
[12] R. Comes, C. Neamțu, Z. L. Buna, Bodi, D. Popescu, V. Tompa, R. Ghinea, L. Mateescu-Suciu,
Enhancing accessibility to cultural heritage through digital content and virtual reality: A case
study of the sarmizegetusa regia unesco site, Journal of Ancient History And Archaeology 7 (2020).
[13] P. Kosmas, G. Galanakis, V. Constantinou, G. Drossis, M. Christofi, I. Klironomos, P. Zaphiris,
M. Antona, C. Stephanidis, Enhancing accessibility in cultural heritage environments: considera-
tions for social computing, Universal Access in the Information Society 19 (2020) 471–482.
[14] G. Pehlivanides, K. Monastiridis, A. Tourtas, E. Karyati, G. Ioannidis, K. Bejelou, V. Antoniou,
P. Nomikou, The virtualdiver project. making greece’s underwater cultural heritage accessible to
the public, Applied Sciences 10 (2020) 8172.
[15] M. Mastrogiuseppe, S. Span, E. Bortolotti, Improving accessibility to cultural heritage for people
with intellectual disabilities: A tool for observing the obstacles and facilitators for the access to
knowledge, Alter 15 (2021) 113–123.
[16] J. Marín-Nicolás, M. P. Sáez-Pérez, An evaluation tool for physical accessibility of cultural heritage
buildings, Sustainability 14 (2022) 15251.
[17] A. Arenghi, M. Agostiano, Cultural heritage and disability: can ict be the ‘missing piece’ to face
cultural heritage accessibility problems?, in: Smart Objects and Technologies for Social Good:
Second International Conference, 2016, Venice, Italy, November 30–December 1, 2017, pp. 70–77.
[18] F. De Felice, T. Gramegna, F. Renna, G. Attolico, A. Distante, A portable system to build 3d models
of cultural heritage and to allow their exploration by blind people, in: IEEE International Workshop
on Haptic Audio Visual Environments and their Applications, IEEE, 2005, 6 pp.
[19] D. Ahmetovic, N. Kwon, U. Oh, C. Bernareggi, S. Mascetti, Touch screen exploration of visual
artwork for blind people, in: Proceedings of the Web Conference 2021, 2021, pp. 2781–2791.
[20] F. D’Agnano, C. Balletti, F. Guerra, P. Vernier, et al., Tooteko: A case study of augmented reality for
an accessible cultural heritage. digitization, 3d printing and sensors for an audio-tactile experience,
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information
Sciences 40 (2015) 207–213.
[21] A. Reichinger, A. Fuhrmann, S. Maierhofer, W. Purgathofer, Gesture-based interactive audio guide
on tactile reliefs, in: Proceedings of the 18th International ACM SIGACCESS Conference on
Computers and Accessibility, 2016, pp. 91–100.
[22] V. Rossetti, F. Furfari, B. Leporini, S. Pelagatti, A. Quarta, Enabling access to cultural heritage for
the visually impaired: an interactive 3d model of a cultural site, Procedia computer science 130
(2018) 383–391.
[23] L. Cavazos Quero, J. Iranzo Bartolomé, J. Cho, Accessible visual artworks for blind and visually
impaired people: comparing a multimodal approach with tactile graphics, Electronics 10 (2021).
[24] J. Montusiewicz, Z. Czyż, R. Kayumov, Selected methods of making three-dimensional virtual
models of museum ceramic objects, Applied Computer Science 11 (2015) 51–65.
[25] D. Akca, A. Gruen, B. Breuckmann, C. Lahanier, High definition 3d-scanning of arts objects and
paintings, Optical 3-D measurement techniques VIII 2 (2007) 50–58.
[26] M. Neumüller, A. Reichinger, F. Rist, C. Kern, 3d printing for cultural heritage: Preservation,
accessibility, research and education, 3D research challenges in cultural heritage: a roadmap in
digital heritage preservation (2014) 119–134.
[27] J. Montusiewicz, M. Barszcz, S. Korga, Preparation of 3d models of cultural heritage objects to be
recognised by touch by the blind—case studies, Applied Sciences 12 (2022) 11910.
[28] A. Bruns, A. A. Spiesberger, A. Triantafyllopoulos, P. Müller, B. W. Schuller, “Do touch!” - 3d
scanning and printing technologies for the haptic representation of cultural assets: A study with
blind target users, in: Proceedings of the 5th Workshop on analySis, Understanding and proMotion
of heritAge Contents, 2023, pp. 21–28.
[29] P. F. Wilson, J. Stott, J. M. Warnett, A. Attridge, M. P. Smith, M. A. Williams, Evaluation of
touchable 3d-printed replicas in museums, Curator: The Museum Journal 60 (2017) 445–465.
[30] A. du Plessis, J. Els, S. le Roux, M. Tshibalanganda, T. Pretorius, Data for 3d printing enlarged
museum specimens for the visually impaired, Gigabyte 2020 (2020).
[31] P. F. Wilson, S. Griffiths, E. Williams, M. P. Smith, M. A. Williams, Designing 3-d prints for blind
and partially sighted audiences in museums: exploring the needs of those living with sight loss,
Visitor Studies 23 (2020) 120–140.
[32] M. Telesinska, Multimodal 3d printed urban maps for blind people. evaluations and scientific inves-
tigations, in: Proceedings of the 25th International ACM SIGACCESS Conference on Computers
and Accessibility, 2023, pp. 1–7.
[33] D. S. Wooding, Fixation maps: quantifying eye-movement traces, in: Proceedings of the 2002
symposium on Eye tracking research & applications, 2002, pp. 31–36.
[34] G. Kootstra, L. R. Schomaker, Prediction of human eye fixations using symmetry, in: Proceedings
of the Annual Meeting of the Cognitive Science Society, volume 31, 2009.
[35] S. S. Kruthiventi, K. Ayush, R. V. Babu, Deepfix: A fully convolutional neural network for predicting
human eye fixations, IEEE Transactions on Image Processing 26 (2017) 4446–4456.
[36] W. Wang, J. Shen, Deep visual attention prediction, IEEE Transactions on Image Processing 27
(2017) 2368–2378.
[37] Y. Song, Z. Liu, G. Li, D. Zeng, T. Zhang, L. Xu, J. Wang, Rinet: Relative importance-aware network
for fixation prediction, IEEE Transactions on Multimedia 25 (2023) 9263–9277.
[38] W. Wang, J. Shen, X. Dong, A. Borji, Salient object detection driven by fixation prediction, in:
Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
[39] Y. A. D. Djilali, K. McGuinness, N. O’Connor, Learning saliency from fixations, in: Proceedings of
the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 383–393.