<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>T. Hovorushchenko);</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Forming AI-personas and prompts for describing art objects for people with visual impairments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tetiana Hovorushchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleg Voichur</string-name>
          <email>ovoichur@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Matiukh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Artem Boyarchuk</string-name>
          <email>artem.boyarchuk@taltech.ee</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Hovorushchenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Khmelnytskyi National University</institution>
          ,
          <addr-line>Institutska str., 11, Khmelnytskyi, 29016</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Pirogov Memorial Medical University</institution>
          ,
          <addr-line>Pirogova str., 56, Vinnytsya, 21018</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Tallinna Tehnikaülikool</institution>
          ,
          <addr-line>Ehitajate tee 5, Tallinn, 12616</addr-line>
          ,
          <country country="EE">Estonia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>This study aims to generate descriptions of art objects (which will later be converted into Braille using specialized software and into audio recordings) using various artificial intelligence tools, in particular, through the formation of prompts and AI-personas for describing art objects for people with visual impairments. The key advantage of the proposed method of forming AI-personas and prompts for describing art objects for people with visual impairments is its ability to continuously adapt to the needs and behavior of visitors, thus ensuring intuitive and comfortable interaction. This is especially important for taking into account gender differences in how people with visual impairments perceive art objects (for example, women's focus on details and emotions, and men's focus on structure and dynamics). Based on a comparative analysis of the descriptions of the painting generated by Gemini and ChatGPT for a blind woman and a blind man, it can be concluded that the descriptions created by Gemini show a stronger tendency towards analysis, compositional logic, and the technical aspects of the painting, while the descriptions created by ChatGPT tend toward emotionality, integrity, and the creation of a personal connection. Thus, Gemini's description is slightly more effective at conveying the spatial depth, texture, and compositional logic of the painting, i.e., it is more suitable for generating descriptions of art objects for a blind man, while ChatGPT's description is more effective at conveying the emotional state, intimacy, and overall atmosphere of the work, making it more suitable for generating descriptions of art objects for a blind woman.</p>
      </abstract>
      <kwd-group>
        <kwd>AI-persona</kwd>
        <kwd>prompt</kwd>
        <kwd>artificial intelligence (AI)</kwd>
        <kwd>generative artificial intelligence</kwd>
        <kwd>description of art objects</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Global statistics on visual impairment show alarming trends: in 2015, more than 253 million people
worldwide lived with visual impairment (of whom 36 million were blind and 217 million had
moderate to severe visual impairment) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and by 2020, this number had risen to 295 million
people (43.3 million blind and 251.7 million with severe visual impairment) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and by 2050, the
total number of blind people and people with moderate to severe visual impairment is projected to
increase to 703 million [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As for Ukraine, official statistics show that there are about 70,000 blind
citizens, but the actual number, according to unofficial data, may be three times higher,
highlighting the critical need for adaptive solutions.
      </p>
      <sec id="sec-1-1">
        <p>
          By ratifying the UN Convention on the Rights of Persons with Disabilities, Ukraine has
committed itself (in particular, under Article 30) to ensuring full accessibility to cultural life,
including cinema, theaters, museums, and the adaptation of works for persons with visual
impairments, without infringing on copyright [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Despite this, painting and visual arts remain
almost inaccessible to people with visual impairments. At the same time, the vast majority of blind
Ukrainians (66.9%) consider participation in cultural life to be important, and almost 62% are
convinced that the state should ensure equal rights in this area, which is fully in line with
international requirements and society's expectations [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          The role of modern medical information technologies in the lives of people with visual
impairments is strategic — they act as a catalyst for social integration and active participation in all
aspects of public life [
          <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
          ]. The use of innovative solutions helps to significantly increase the level
of independence and improve the quality of life of this category of citizens, which is a necessary
condition for reducing barriers and ensuring equal opportunities in accordance with their needs [
          <xref ref-type="bibr" rid="ref7 ref8">7,
8</xref>
          ].
        </p>
        <p>
          Current attempts to adapt art for the blind are mostly limited to verbal descriptions (audio
guides) that convey only a general impression and composition, but do not allow for a spatial or
tactile understanding of the object [
          <xref ref-type="bibr" rid="ref10 ref11 ref9">9-11</xref>
          ]. While tactile models (which are expensive and
handmade) provide direct access to form and texture, they lack accompanying interactive textual
information [
          <xref ref-type="bibr" rid="ref12 ref13 ref14">12-14</xref>
          ]. Thus, there is a significant gap between the need for full sensory access to
artistic heritage and the capabilities of traditional adaptation methods.
        </p>
        <p>Since traditional methods cannot provide comprehensive, interactive, and scalable access, there
is a need to apply the latest information technologies. Artificial intelligence (AI) plays a decisive
role here, acting as a tool for automating complex multimodal transformations. A comparison of
the AI Trend Impact Radar reports for 2024 and 2025 (Fig. 1) confirms the continued priority of
areas such as “AI-Driven Multimodal Interaction” and “Generative AI for Content Adaptation.”
This trend indicates the advisability of moving from passive information delivery to AI-generated
adaptive, sensory formats that can bridge the identified gap by ensuring full integration of tactile
and audio data.</p>
        <p>Since artificial intelligence allows media resources to be adapted (creating automatic
descriptions, audio and tactile formats), the key task today is to create an accessible art space for
people with visual impairments. Accordingly, the urgent task is to create an accessible art space by
transforming 2D images into 3D models (tactile format) and then, using AI, generating descriptions
that will be converted into audio recordings (audio format) and Braille using specialized software.
Based on this, the development of a comprehensive information technology that automates the
process of converting 2D images into multisensory content (3D tactile + audio + Braille) using AI is
a new and relevant scientific and practical task that requires a multidisciplinary approach and has
no direct analogues among existing solutions.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review</title>
      <p>Let's conduct a survey of known methods and tools for using verbal descriptions, tactile models,
and Braille descriptions, as well as the use of artificial intelligence methods and tools to describe art
objects for people with visual impairments.</p>
      <p>
        The authors [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] clearly state that audio description and accessible information sheets “cannot
convey a significant part of the spatial information” about a work of art. The study emphasizes that
tactile modality is best suited for understanding graphic images.
      </p>
      <p>
        The thesis [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] notes that, compared to 3D objects, 2D objects (paintings, maps) still pose
accessibility problems for the blind. It is emphasized that although audio provides a lot of
information, it is insufficient compared to the tactile information that a model or 3D object can
convey.
      </p>
      <p>
        Study [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] directly examined the combination of audio description and tactile elements. It
confirms that combining audio with touch is critical for better perception. This shows that audio
alone is insufficient.
      </p>
      <p>
        Research [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] proves that audio description is an auxiliary tool. The author emphasizes that
deep aesthetic awareness and perception of art occur only when blind people have access to
three-dimensional art or tactile perception of an object. Audio description only complements touch,
helping to organize and comprehend what is felt by touch.
      </p>
      <p>
        The authors [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] emphasize that tangible models, such as 3D printing, are a critical solution for
overcoming the inaccessibility of visual art in museums. The study focuses on multisensory
experience (multimodality) as a path to full perception of art.
      </p>
      <p>
        Article [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] focuses on empowering independence for blind museum visitors through the
introduction of improved interactive technologies. The authors emphasize that existing solutions
are often passive and do not allow users to independently and fully interact with exhibits.
      </p>
      <p>
        The authors of the review [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] point out that although tactile graphics (including 3D printing)
are necessary, the process of creating them is labor-intensive, especially for complex images. This
limits their widespread use. This review notes that “refreshable (dynamic) tactile displays” (which
could be interactive) remain prohibitively expensive, which excludes them from mass use. The
systematic review clearly classifies and analyzes numerous current solutions (in particular, using
computer vision and AI algorithms) for the automatic generation of tactile graphics (in particular,
3D models) from visual images. The review emphasizes that traditional methods are insufficient to
meet the semantic requirements of tactile perception.
      </p>
      <p>
        The authors [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] point out the main drawback of tactile materials — the need to use separate
Braille labels, which take up space and force the user to constantly “switch” between touching the
model and reading the text, disrupting the exploration and reading processes. This is the lack of
accompanying interactive information.
      </p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] clearly focuses on the shortcomings of individual methods (tactile models are
static, audio description does not convey space). The solution is interactive multimodal guides that
combine a sensory surface and localized audio description. This allows the user to independently
explore the object, receiving audio information about the specific area they are touching.
      </p>
      <p>
        Study [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] emphasizes that Braille text is only one solution and is not optimal for all
blind persons. It highlights the need for the collective use of various non-visual alternatives.
      </p>
      <p>
        In the context of maps (which are 2D information, like paintings), study [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] proves that an
interactive audio-tactile system is significantly more effective than traditional maps with Braille
labels. It is also mentioned that only a small percentage of blind people read Braille (for
example, in France, only 15%).
      </p>
      <p>
        The authors [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] examine in detail the impact of 3D printing technologies on the reproduction
of cultural objects. They emphasize that the labor-intensive process of preparing geometric models
and the high cost/complexity of printing complex objects are among the main obstacles to ensuring
full accessibility to museums.
      </p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] describes the problems of creating authentic tactile replicas — traditional
methods (casting) are labor-intensive and risky for the originals, and 3D printing, although it
produces accurate copies, still requires significant time and resource costs. This justifies the need to
automate the process.
      </p>
      <p>The study [22] proves that an interactive audio-tactile tool provides significantly better recall
and cognitive mapping (formation of a mental image) compared to traditional static tactile maps.</p>
      <p>Although article [23] deals with mathematics, its conclusions are universal — the correction of
structural information in tactile form (3D) and its combination with audio significantly improves
the cognitive process and compensates for vision loss. This emphasizes that 3D models alone are
not enough — multimodal integration is needed.</p>
      <p>Research [24] conducted as part of the SHIFT project demonstrates how artificial intelligence
(AI), virtual reality (VR), and multisensory tools are transforming the cultural heritage experience.
It is noted that AI is used to automatically generate audio descriptions and integrate tactile
elements (with QR codes) with audio and Braille descriptions.</p>
      <p>The authors [25] present a multimodal system that uses conversational AI and other sensory
input to create a more engaging and personalized experience for museum visitors. The paper
emphasizes that the future lies in interactive, AI-enhanced systems rather than passive guides. The
article describes a multimodal system that uses conversational AI and other interactive channels
(gestures, audio) to provide a personalized and immersive experience of interacting with digital
reproductions of art. It demonstrates how AI goes beyond simple description to create lively,
adaptive communication.</p>
      <p>The authors [26] present an overview and investigation of the potential of deep learning
models, such as CNNs and GANs, for reconstructing 3D geometry from a single 2D image
(painting). The article confirms that to convert a painting into a tactile copy (which is necessary for
the blind), it is necessary to transform it into a 3D model, and AI is the key to automating this
process. It is emphasized that although AI is effective, post-processing and user interaction are
needed to improve the accuracy of 3D models.</p>
      <p>Article [27] proposes an AI-based TactileNet model that automates the generation of
high-quality tactile graphics with high compliance with accessibility standards. The study emphasizes
that this approach reduces labor intensity and proves that AI can complement human experience in
creating adapted content.</p>
      <p>Review [28] confirms that the integration of AI in museums plays a central role in improving
accessibility. AI is used to personalize content (adapting it to interests and needs), generate audio
descriptions, and integrate with VR/AR to create more inclusive and educational opportunities for
visitors with various limitations. It is confirmed that AI is key to personalizing content and
increasing inclusivity. AI tools such as text-to-speech conversion and audio description generation
are critical for providing access to the blind, and the integration of AI with 3D printing creates
tactile experiences.</p>
      <p>Research [29] proposes an innovative method that uses tactile sensing together with AI-based
3D generation models (e.g., DreamFusion) to create realistic and detailed 3D objects. This
demonstrates how the integration of touch can improve the geometric accuracy of AI-generated 3D
models.</p>
      <p>An analytical article [30] provides an example of the use of technologies such as Aira, which
combines a smartphone/glasses and AI to provide real-time verbal descriptions. It also mentions
the use of AI to personalize routes and adapt content to individual needs.</p>
      <p>The video [31] demonstrates how AI-based technology converts 2D images into detailed 3D
models, allowing blind people to experience art through touch for the first time.</p>
      <p>Therefore, current attempts to adapt art for the blind are mostly limited to verbal descriptions
(audio guides) that convey only a general impression and composition, but do not allow for a
spatial or tactile understanding of the object, as research shows that audio description is unable to
effectively convey the spatial and structural characteristics of visual works. Tactile models provide
direct access to form and texture, but they have significant limitations — they are expensive and
labor-intensive to create (especially for complex objects), and their static nature requires the use of
separate Braille labels, which disrupts the process of continuous tactile exploration and does not
provide interactive accompaniment. This creates a significant gap that needs to be filled by
multisensory technologies. Combining 3D modeling with interactive information technologies is
the most relevant and effective way to ensure the accessibility of art and culture.</p>
      <p>So, this study aims to generate descriptions of art objects (which will later be converted into
Braille using specialized software and audio recordings) using various artificial intelligence tools, in
particular, the formation of prompts and AI-personas for describing art objects for people with
visual impairments.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Method for creating AI-personas and prompts for describing art objects for people with visual impairments</title>
      <p>Before creating prompts and AI personas to describe art objects for people with visual
impairments, let's consider the gender differences in art perception.</p>
      <p>The focus of attention and interpretation of art may differ depending on gender, which is due to
a combination of biological, cognitive, and sociocultural factors [32-34]. These gender differences
influence aesthetic preferences and ways of perceiving visual art objects:</p>
      <p>Women tend to multitask and are better at perceiving context. They are more effective at
noticing small details, nonverbal cues, and emotions, as their attention is often socially
oriented toward interpersonal interactions and quickly detecting changes in the emotional
environment.</p>
      <p>Men demonstrate a stronger ability for tunnel vision (focusing on a single task) and spatial
orientation. They tend to focus more on global structures, logical connections, and
mechanical details (analytical approach), while showing less sensitivity to emotional and
social cues.</p>
      <p>It is important to note that these characteristics are generalizations, and the final focus of
attention is always determined by individual experience, education, and personality traits [32-34].
However, given the key cognitive and emotional differences, men and women demonstrate different
perceptions of images, which affects their attention and interpretation of works of art [35, 36].</p>
      <p>Women tend to have a holistic, emotionally-oriented perception, focusing on [37-39]:</p>
      <p>Emotional mood and atmosphere – analysis of the feelings evoked by the painting;
attention to color scheme, play of light, facial expressions, and interactions between
characters that convey sympathy, care, or tenderness.</p>
      <p>Small details and symbolism – interest in secondary elements, context, and hidden
meaning, especially related to subtle social or psychological messages and the inner world
of characters.</p>
      <p>Images of people and emotions – a tendency to read nonverbal cues (gestures, postures,
facial expressions), focusing on emotional connections and social roles.</p>
      <p>Harmony and aesthetics – an appreciation of the smoothness of lines, the softness of
transitions between shades, and the overall compositional balance.</p>
      <p>Color and texture – a more emotional response to the palette, attention to smooth
transitions of colors and textures that create a sense of depth and softness.</p>
      <p>Men tend to use an analytical, spatially-oriented approach, focusing on [37-39]:</p>
      <p>Composition and structure – focus on scene construction, the logic of element placement,
and perspective; attention to structural details that reflect power, dynamics, or energy
(contrasts, tension, movement).</p>
      <p>Dynamics and movement – more often notice tension in poses, overall plot development,
and “active” aspects (lines, shapes, objects) that create images of strength, struggle, or
action.</p>
      <p>Contrasts and technique – attention is paid to the play of light and shadow, expressive
brushstrokes, clear, sharp lines, and rich, bright colors that enhance the visual impact.</p>
      <p>Plot and logic of events – importance of understanding the plot, the connection between
characters and objects; focus on the global context and historical/cultural significance.</p>
      <p>Object and activity orientation – greater inclination towards aspects related to activity,
movement, large spaces, power, and interaction with objects.</p>
      <p>These gender differences in perception show that adaptation methods (e.g., audio description or
tactile models) need to take these different focuses into account to ensure the most effective and
complete access to art.</p>
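      <p>The gender-specific foci above can be sketched as a simple lookup that a description pipeline
could consult during personalization. This is a minimal illustration only; the profile labels,
dictionary keys, and fallback behavior are assumptions of this sketch, not part of the cited studies.</p>
      <preformat>
```python
# Hypothetical mapping of perception profiles to description foci,
# following the generalizations above (labels are illustrative assumptions).
PERCEPTION_FOCI = {
    "holistic-emotional": [
        "emotional mood and atmosphere",
        "small details and symbolism",
        "images of people and emotions",
        "harmony and aesthetics",
        "color and texture",
    ],
    "analytical-spatial": [
        "composition and structure",
        "dynamics and movement",
        "contrasts and technique",
        "plot and logic of events",
        "object and activity orientation",
    ],
}

def description_foci(profile):
    """Return the description foci for a perception profile.

    Unknown profiles fall back to the combined list, since the
    generalizations above never override individual preferences.
    """
    if profile in PERCEPTION_FOCI:
        return PERCEPTION_FOCI[profile]
    combined = []
    for foci in PERCEPTION_FOCI.values():
        combined.extend(foci)
    return combined
```
      </preformat>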
      <p>In article [40], the co-authors developed a method of preprocessing information for preparing
descriptions of art objects using artificial intelligence, which consists in developing a language
model/prompt that, based on input data (digital or 3D images of art objects), automatically
generates a personalized and emotionally charged text description. The main goal of this method is
to personalize text content, taking into account the individual cognitive, emotional, and cultural
characteristics of users, as well as their gender. The steps of this method include: semantic analysis
and classification of the image, identification of the target audience for further adaptation, selection
of the style and tone of the text, formation of the structure and content of the description, and
development of a language model for effective automatic generation.</p>
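      <p>The steps of the preprocessing method from [40] can be sketched as a pipeline. The function
names and the dictionary-based data flow below are hypothetical simplifications; the cited method
prescribes the steps, not this representation, and step five (the language model itself) is omitted.</p>
      <preformat>
```python
# A minimal sketch of the preprocessing pipeline from [40], with each
# step reduced to a dictionary transformation for illustration only.

def semantic_analysis(image_meta):
    """Step 1: semantic analysis and classification of the image."""
    return {"genre": image_meta.get("genre", "unknown"),
            "subjects": image_meta.get("subjects", [])}

def identify_audience(user_profile):
    """Step 2: identification of the target audience for adaptation."""
    return {"impairment": user_profile.get("impairment", "blind"),
            "gender": user_profile.get("gender", "unspecified")}

def select_style(audience):
    """Step 3: selection of the style and tone of the text."""
    if audience["gender"] == "female":
        return "emotionally oriented"
    if audience["gender"] == "male":
        return "spatially oriented"
    return "neutral"

def form_structure(classification, style):
    """Step 4: formation of the structure and content of the description."""
    return {"classification": classification, "style": style,
            "sections": ["overview", "details", "atmosphere"]}

def prepare_description_spec(image_meta, user_profile):
    """Chain steps 1-4; step 5 (the language model) is out of scope here."""
    classification = semantic_analysis(image_meta)
    audience = identify_audience(user_profile)
    style = select_style(audience)
    return form_structure(classification, style)
```
      </preformat>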
      <p>The prompt is a key factor in ensuring high-quality interaction with generative AI, as it controls
the quality of the output content, allowing the model to generate highly informative, logically
correct, and personalized responses that are precisely tailored to the specific context and user
requests. It is a well-formulated prompt, structured with these features in mind, that determines
the quality, logic, and personalization of the generated responses, ensuring clear and effective
communication for each user. The clear structure of the prompt allows AI to generate responses
that are not only logical and high-quality, but also personalized to take into account the gender
context (for example, by emphasizing emotional or, conversely, structural accents) for maximum
effectiveness of perception.</p>
      <p>The high-quality prompt should explain to generative AI who to be (role), what to do (task), and
what it needs to know (context). Based on this, the method of forming AI personas and prompts for
describing art objects for people with visual impairments consists of the following steps:</p>
      <p>Description of the AI persona — assignment of a role to the artificial intelligence:
a. Description of the persona's role and goals (for example, “You are a researcher in the
field of information technology. Your goal is to help me find and review relevant
scientific articles,” etc.).
b. Description of experience and expert knowledge (e.g., “You are an expert in
information technology,” etc.).
c. Description of communication style and tone (e.g., formal and academic or encouraging
and supportive).
d. Description of specific instructions and restrictions (e.g., “Do not draw conclusions,
only state facts,” etc.).</p>
      <p>Description of the task — a clear definition of what the artificial intelligence should do.</p>
      <p>Description of the context — providing the necessary background information.</p>
      <p>Description of the response format — defining the desired response style (text, list, table).</p>
      <p>Description of the tone of the response (formal, informal, persuasive, etc.).</p>
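      <p>The components above can be collected into a single data structure and joined into one prompt
string. The class and function names (AIPersona, PromptSpec, build_prompt) and the joining format
are assumptions of this sketch; the method prescribes only which components a prompt must
contain, not this particular representation.</p>
      <preformat>
```python
# Minimal sketch of the prompt structure described by the method.
from dataclasses import dataclass

@dataclass
class AIPersona:
    role: str            # a. the persona's role and goals
    expertise: str       # b. experience and expert knowledge
    style: str           # c. communication style and tone
    restrictions: str    # d. specific instructions and restrictions

@dataclass
class PromptSpec:
    persona: AIPersona
    task: str            # what the AI should do
    context: str         # necessary background information
    response_format: str # desired response style (text, list, table)
    response_tone: str   # formal, informal, persuasive, etc.

def build_prompt(spec):
    """Assemble the components into a single prompt string."""
    parts = [
        "Persona: " + spec.persona.role + " " + spec.persona.expertise,
        spec.persona.style,
        spec.persona.restrictions,
        "Task: " + spec.task,
        "Context: " + spec.context,
        "Response format: " + spec.response_format,
        "Response tone: " + spec.response_tone,
    ]
    return "\n".join(parts)
```
      </preformat>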
      <p>The key advantage of the proposed method of forming AI-personas and prompts for describing
art objects for people with visual impairments is its ability to continuously adapt to the needs and
behavior of visitors, thus ensuring intuitive and comfortable interaction. This is especially
important for taking into account gender differences in the perception (for example, women's focus
on details and emotions, and men's focus on structure and dynamics) of art objects by people with
visual impairments.</p>
      <p>The developed method provides control over the AI generation process, transforming it from a
simple text generator into a reliable, adaptive communication tool. Clearly defining the “task” and
providing comprehensive “context” ensures that the AI response will be logically correct and
correspond to actual information about the art object, rather than being a “hallucination.”
Describing the “AI persona” and “specific instructions/restrictions” prevents the AI from deviating
from the topic, ensuring that the description remains focused on the artwork and accessibility
needs. Defining the “role,” “communication style,” and “tone” allows the AI model to tailor the
description to the individual needs of the user. This is critical for taking into account gender,
cognitive, or emotional differences in the perception of art objects by people with visual
impairments. A clear description of the desired “response tone” allows AI to generate an
emotionally charged description, which is important for conveying the aesthetic value of art that is
not usually conveyed by dry technical audio guides. Defining the “response format” ensures
consistency and ease of use of the resulting description. This is important for the subsequent
automatic conversion of text into Braille or audio recording. A structured, understandable
response, adapted to a specific style (academic, supportive, etc.), reduces the cognitive load on
visually impaired users, making the perception of art more intuitive and comfortable. Thus, this
method allows generative AI to be transformed into a reliable and controllable tool capable of
creating multimodally adapted content (audio, tactile models, Braille) based on personalized and
emotionally rich text descriptions.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results &amp; discussion</title>
      <p>Below are some examples of prompts for describing art objects for people with visual
impairments, formed according to the proposed method.</p>
      <p>Let's start with prompt1, which will teach generative AI to describe an art object to a blind
woman:
1. Persona: You are an art expert, a connoisseur of painting, and you want to help a blind
woman “see” Leonardo da Vinci's famous painting, the Mona Lisa. Your goal is to help the
blind woman imagine this painting. You are an expert in the field of art and also understand
the gender-specific characteristics of the perception of art objects. You must describe the
painting in an emotionally oriented style, taking into account the context and details of the
painting. Do not make conclusions, describe only what is in the painting, do not invent
anything of your own.
2. The task is to describe the painting so that a blind woman can imagine it.
3. Context — when describing the painting, take into account the fact that women tend to
have a holistic, emotionally-oriented perception and focus on the emotional mood and
atmosphere of the painting, small details and symbolism, images of people and emotions,
harmony and aesthetics, color and texture.
4. Response format — a text of 4-5 sentences.
5. Response tone — emotionally oriented.</p>
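      <p>For illustration, prompt1 can also be assembled programmatically from its five numbered
components. The (label, text) list representation and the render_prompt helper are hypothetical,
and the component texts are abridged from the prompt above.</p>
      <preformat>
```python
# Prompt1 above, assembled as one numbered message string (sketch only).
PROMPT1_COMPONENTS = [
    ("Persona", "You are an art expert, a connoisseur of painting, and you "
                "want to help a blind woman 'see' Leonardo da Vinci's famous "
                "painting, the Mona Lisa."),
    ("Task", "Describe the painting so that a blind woman can imagine it."),
    ("Context", "Women tend to have a holistic, emotionally-oriented "
                "perception and focus on emotional mood and atmosphere, "
                "small details and symbolism, harmony, color and texture."),
    ("Response format", "A text of 4-5 sentences."),
    ("Response tone", "Emotionally oriented."),
]

def render_prompt(components):
    """Join (label, text) pairs into the numbered prompt layout used above."""
    lines = []
    for number, (label, text) in enumerate(components, start=1):
        lines.append(str(number) + ". " + label + ": " + text)
    return "\n".join(lines)
```
      </preformat>
      <p>Rendering the components in this way reproduces the five-part layout of prompt1 and makes it
straightforward to swap in the context and tone of prompt2 for a blind man.</p>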
      <p>Let's run the created prompt1 in ChatGPT (the result of the prompt execution is shown in Fig. 2)
and Gemini (the result of the prompt execution is shown in Fig. 3) and compare the results (Table
1).</p>
      <p>[Table 1, Gemini description: “Subtle smile plays on her lips, a delicate, almost veiled
expression that suggests a deep, peaceful inner world and an enigma.” Creates a sense of deep calm
and a desire to unravel the mystery; may seem a little more formal or descriptive.]</p>
      <sec id="sec-4-1">
        <title>ChatGPT description</title>
        <p>[Table 1, ChatGPT description: more intimate and direct, focused on presence and inner life;
begins with contact (gaze, smile) and facial glow, then details the background; more focused on
texture and immediacy (“face glows softly in warm light,” “seems to breathe with hidden life,”
“simple dark dress, with fine folds and soft texture”); ends with a very sensory metaphor, “quiet
breath”: “Smile — calm, tender, and mysterious — seems to breathe with hidden life.”]</p>
      </sec>
      <sec id="sec-4-2">
        <p>[Table 1, ChatGPT description (continued): creates a sense of warm, close presence and
emotional connection; feels like a personal impression.]</p>
        <p>The description created by Gemini is more analytical and clearly distinguishes between the
figure (warm, earthy) and the background (cold, ethereal). The description created by ChatGPT is
more emotional and holistic. The metaphor of “quiet breath” at the end gives a comprehensive
sense of the painting that remains understandable without sight. It also emphasizes that the
woman's gaze meets the listener's, creating a sense of personal communication.
Although the description created by Gemini is more detailed in terms of composition (hands,
contrast), the description created by ChatGPT is probably more effective in conveying the overall
feeling and intimacy of the Mona Lisa, as it focuses on her lively presence and emotional
connection.</p>
        <p>Now let's create prompt2, which will teach generative AI to describe an art object to a blind
man:
1. Person: You are an art expert, a connoisseur of painting, and you want to help a blind man
“see” Leonardo da Vinci's famous painting, the Mona Lisa. Your goal is to help the blind
man imagine this painting. You are an expert in the field of art and also understand the
gender-specific characteristics of the perception of art objects. You must provide an
analytical description of the painting, taking into account its structure, dynamics, and logic.
Do not draw conclusions; describe only what is in the painting, and do not invent anything of
your own.
2. The task is to describe the painting so that the blind man can imagine it.
3. Context: when describing the painting, take into account the fact that men tend to have an
analytical, spatially-oriented perception and focus on composition and structure, dynamics
and movement, contrasts and technique, plot and logic of events, orientation towards the
object and activity.
4. Response format — a text of 4-5 sentences.
5. Response tone — spatially oriented.</p>
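        <p>Prompt1 and prompt2 differ only in the perception-specific wording (persona, context, tone), so the context block can be selected from a listener profile. The helper below is our own illustrative sketch, not part of the paper's tooling.</p>

```python
# Illustrative sketch (our own helper, not from the paper): select the
# perception-specific context block from a listener profile, since the
# two prompts above differ only in this gender-adapted wording.

CONTEXTS = {
    "holistic": ("women tend to have a holistic, emotionally-oriented "
                 "perception: emotional mood and atmosphere, small details "
                 "and symbolism, harmony and aesthetics, color and texture."),
    "analytical": ("men tend to have an analytical, spatially-oriented "
                   "perception: composition and structure, dynamics and "
                   "movement, contrasts and technique, plot and logic."),
}

def context_for(profile):
    """Pick the context block matching the listener's perception style."""
    style = "holistic" if profile.get("style") == "emotional" else "analytical"
    return CONTEXTS[style]
```

        <p>Keeping the two context blocks in one table makes it easy to add further perception profiles later without rewriting the rest of the prompt.</p>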
        <p>Let's run the created prompt2 in ChatGPT (the result of running the prompt is shown in Fig. 4)
and Gemini (the result of running the prompt is shown in Fig. 5) and compare the results (Table 2).</p>
        <p>Both texts convey the key visual and compositional characteristics of the painting, but do so
with different emphases. The description created by Gemini is slightly more effective for a blind
man, as it better explains the sfumato technique (smoothness, softness) in contrast to hard
structures (balustrade), giving a clearer tactile impression; it explains more clearly how
atmospheric perspective creates a deep, three-dimensional space behind the figure; its language is
more structural (“pyramidal figure,” “vertical axis,” “compositional logic”), which helps to create a
clear mental map of the painting. Both descriptions are of high quality, but the one created by
Gemini provides slightly more information about texture and depth, which are key to “spatial
touch”.</p>
        <p>Therefore, based on a comparative analysis of the descriptions of the painting generated by
Gemini and ChatGPT for a blind woman and a blind man, it can be concluded that the descriptions
created by Gemini show a stronger tendency towards analysis, compositional logic, and technical
aspects of the painting, while the descriptions created by ChatGPT tend toward emotionality,
integrity, and the creation of a personal connection. Thus, Gemini's description is slightly more
effective at conveying the spatial depth, texture, and compositional logic of the painting, i.e., it is
more suitable for generating descriptions of art objects for a blind man, while ChatGPT's
description is more effective at conveying the emotional state, intimacy, and overall atmosphere of
the work, making it more suitable for generating descriptions of art objects for a blind woman.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The urgent task is to create an accessible art space by transforming 2D images into 3D models
(tactile format) and then, using AI, generating descriptions that will be converted into audio
recordings (audio format) and Braille using specialized software. Based on this, the development of
a comprehensive information technology that automates the process of converting 2D images into
multisensory content (3D tactile + audio + Braille) using AI is a new and relevant scientific and
practical task that requires a multidisciplinary approach and has no direct analogues among
existing solutions.</p>
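      <p>The conversion pipeline described above can be sketched as a sequence of stages. All stage names below are our own placeholders; real implementations would call external tools (2D-to-3D conversion, text-to-speech engines, Braille translation software).</p>

```python
# Illustrative sketch of the multisensory conversion pipeline described
# above: 2D image -> 3D tactile model, plus an AI-generated description
# converted into audio and Braille. All stage names are our own
# placeholders, not components of an existing system.

def to_tactile_3d(image_id):
    """Placeholder for 2D-to-3D relief conversion of the artwork image."""
    return {"image": image_id, "format": "3d-tactile"}

def describe_with_ai(image_id, profile):
    """Placeholder for the prompt-based AI description step."""
    return {"image": image_id, "text": "description for " + profile}

def to_audio(description):
    """Placeholder for text-to-speech conversion of the description."""
    return {"format": "audio", "source": description["text"]}

def to_braille(description):
    """Placeholder for Braille conversion via specialized software."""
    return {"format": "braille", "source": description["text"]}

def convert(image_id, profile):
    """Run all stages and bundle the multisensory outputs."""
    desc = describe_with_ai(image_id, profile)
    return {
        "tactile": to_tactile_3d(image_id),
        "audio": to_audio(desc),
        "braille": to_braille(desc),
    }
```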
      <p>This study aims to generate descriptions of art objects (which will later be converted into
Braille and audio recordings using specialized software) with various artificial intelligence tools, in
particular by forming prompts and AI-personas for describing art objects to people with visual
impairments.</p>
      <p>The key advantage of the proposed method of forming AI-personas and prompts for describing
art objects for people with visual impairments is its ability to continuously adapt to the needs and
behavior of visitors, thus ensuring intuitive and comfortable interaction. This is especially
important for taking into account gender differences in the perception of art objects by people
with visual impairments (for example, women's focus on details and emotions, and men's focus on
structure and dynamics).</p>
      <p>Based on a comparative analysis of the descriptions of the painting generated by Gemini and
ChatGPT for a blind woman and a blind man, it can be concluded that the descriptions created by
Gemini show a stronger tendency towards analysis, compositional logic, and technical aspects of
the painting, while the descriptions created by ChatGPT tend toward emotionality, integrity, and
the creation of a personal connection. Thus, Gemini's description is slightly more effective at
conveying the spatial depth, texture, and compositional logic of the painting, i. e., it is more
suitable for generating descriptions of art objects for a blind man, while ChatGPT's description is
more effective at conveying the emotional state, intimacy, and overall atmosphere of the work,
making it more suitable for generating descriptions of art objects for a blind woman.</p>
      <sec id="sec-5-1">
        <title>Acknowledgements</title>
        <p>The authors would like to thank the EACEA and the ERASMUS+ SMART-PL project for the idea,
inspiration and equipment that made this work possible, as well as for the wonderful and useful
two-day workshop “Using Artificial Intelligence and AI Personas”, which took place at
WrocławTech in September 2025.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Declaration on Generative AI</title>
        <p>During the preparation of this work, the authors used Grammarly for grammar and spelling
checking; DeepL Translate for translating some phrases into English; and ChatGPT and Gemini to
conduct experiments as prompt-based tools for creating automated descriptions of art objects.
After using these tools/services, the authors reviewed and edited the content as needed and take
full responsibility for the publication’s content.</p>
        <p>[22] E. Griffin, L. Picinali, M. Scase, The effectiveness of an interactive audio-tactile map for the
process of cognitive mapping and recall among people with visual impairments, Brain Behav.
10.7 (2020). doi:10.1002/brb3.1650.
[23] M. Maćkowski, M. Kawulok, P. Brzoza, M. Janczy, D. Spinczyk, An Alternative Audio-Tactile
Method of Presenting Structural Information Contained in Mathematical Drawings Adapted to
the Needs of the Blind, Appl. Sci. 13.17 (2023) 9989. doi:10.3390/app13179989.
[24] Enhancing Accessibility Through Multisensory AI Experiences, 2025. URL:
https://www.bmuseums.net/enhancing-accessibility-through-multisensory-ai-experiences/.
[25] A. Ferracani, S. Ricci, F. Principi, G. Becchi, N. Biondi, A. Del Bimbo, M. Bertini, P. Pala, An
AI-Powered Multimodal Interaction System for Engaging with Digital Art: A Human-Centered
Approach to HCI, in: Lecture Notes in Computer Science, Springer Nature Switzerland, Cham,
2025, pp. 281–294. doi:10.1007/978-3-031-93418-6_19.
[26] R. Furferi, Deep Learning Approaches for 3D Model Generation from 2D Artworks to Aid</p>
        <p>Blind People with Tactile Exploration, Heritage 8.1 (2024) 12. doi:10.3390/heritage8010012.
[27] A. Khan, A. Choubineh, M. A. Shaaban, A. Akkasi, M. Komeili, TactileNet: Bridging the
Accessibility Gap with AI-Generated Tactile Graphics for Individuals with Vision Impairment,
2025. URL: https://arxiv.org/pdf/2504.04722v2.
[28] V. Muto, S. Luongo, F. Sepe, A. Prisco, Enhancing Visitors' Digital Experience in Museums
through Artificial Intelligence, 2024. URL:
https://www.iris.unina.it/retrieve/9d50d1a81fab-4bbd-b0a7-e31d0b6255c4/
Enhancing%20Visitors_%20Digital%20Experience%20in%20Museums%20through%20Artificial%
20Intelligence.pdf.
[29] R. Gao, K. Deng, G. Yang, W. Yuan, J.-Y. Zhu, Tactile DreamFusion: Exploiting Tactile Sensing
for 3D Generation, 2024. URL: https://ruihangao.github.io/TactileDreamFusion/.
[30] 5 Ways AI Makes Art More Accessible to Everyone, 2025. URL:
https://www.museumfy.com/blog/5-ways-ai-makes-art-more-accessible-to-everyone.
[31] Touch: Beyond Vision - Bringing Art to Life for the Visually Impaired, 2025.</p>
        <p>URL:https://www.youtube.com/watch?v=rBayYnf56_k.
[32] J. M. Hansen, T. Roald, Aesthetic Empathy: An Investigation in Phenomenological Psychology
of Visual Art Experiences, Journal of Phenomenological Psychology (2022).
[33] R. M. Rodriguez-Boerwinkle, M. J. Boerwinkle, P. J. Silvia, The Open Gallery for Arts
Research (OGAR): An open-source tool for studying the psychology of virtual art museum
visits, Behav. Res. Methods (2022). doi:10.3758/s13428-022-01857-w.
[34] K. Oatley, M. Djikic, Psychology of Narrative Art, Rev. Gen. Psychol. 22.2 (2018) 161–168.</p>
        <p>doi:10.1037/gpr0000113.
[35] M. Skov, M. Nadal, A Farewell to Art: Aesthetics as a Topic in Psychology and Neuroscience,</p>
        <p>Perspect. Psychol. Sci. 15.3 (2020) 630–642. doi:10.1177/1745691619897963.
[36] M. Orr, Towards a feminist revisionism of an aesthetics of mastery, in: Reading, writing and
the influence of Harold Bloom, Manchester University Press, 2024.
doi:10.7765/9781526186027.00014.
[37] N. A. Michna, Feminist aesthetics: then and now – reflections on thirty-five years of inquiry in
the US tradition, Fem. Theory (2024). doi:10.1177/14647001241284969.
[38] S. Cefai, Feminist Aesthetics of Resistance, in: The Routledge Companion to Gender and</p>
        <p>Affect, Routledge, London, 2022, с. 227–236. doi:10.4324/9781003045007-25.
[39] D. Harris, Indigenous Feminist Aesthetic Work as Cultural Revitalization: Facilitating
Uy’Skwuluwun, in: Feminism, Adult Education and Creative Possibility, Bloomsbury
Academic, 2022. doi:10.5040/9781350231078.ch-11.
[40] O. Voichur, O. Hovorushchenko, A. Boyarchuk, Y. Voichur, A. Nester, Method of
preprocessing information for preparing a description of art objects using artificial
intelligence, CEUR-WS 3963 (2025) 1-14.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ackland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Resnikoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bourne</surname>
          </string-name>
          .
          <article-title>World blindness and visual impairment: Despite many successes, the problem is growing</article-title>
          .
          <source>Community Eye Health Journal</source>
          (
          <year>2018</year>
          )
          <fpage>71</fpage>
          -
          <lpage>73</lpage>
          . PMID:
          <volume>29483748</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] GBD 2019 Blindness and Vision Impairment Collaborators; Vision Loss Expert Group of the Global Burden of Disease Study, Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study, Lancet Glob. Health (2021) e130-e143. doi:10.1016/S2214-109X(20)30425-3.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Art for All: the Situation with the Observance of Cultural Rights of People with Disabilities in Ukraine. Analytical report based on the results of the all-Ukrainian survey "Opinions and Views of the Population of Ukraine" (Omnibus) in September 2021. URL: https://ffr.org.ua/wp-content/uploads/2022/10/Mystetstvo-dlya-vsih_-sytuatsiya-zdotrymannyam-kulturnyh-prav-lyudej-z-invalidnistyu-v-Ukrayini.pdf_.pdf. [in Ukrainian]</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hnatchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pavlova</surname>
          </string-name>
          ,
          <article-title>Methodology for the development and application of clinical decisions support information technologies with consideration of civillegal grounds</article-title>
          ,
          <source>Radioelectron. Comput. Syst. No. 1</source>
          (
          <year>2023</year>
          )
          <fpage>33</fpage>
          -
          <lpage>44</lpage>
          . doi:
          <volume>10</volume>
          .32620/reks.
          <year>2023</year>
          .
          <volume>1</volume>
          .03.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moskalenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Osyadlyi</surname>
          </string-name>
          ,
          <article-title>Methods of medical data management based on blockchain technologies</article-title>
          ,
          <source>J. Reliab. Intell. Environ</source>
          . (
          <year>2022</year>
          ).
          <source>doi:10.1007/s40860-022-00178-1.</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          , Ye. Hnatchuk,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moskalenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Osyadlyi</surname>
          </string-name>
          ,
          <article-title>Theoretical and Applied Principles of Information Technology for Supporting Medical Decision-Making Taking into Account the Legal Basis</article-title>
          ,
          <source>CEUR-WS</source>
          <volume>3038</volume>
          (
          <year>2021</year>
          )
          <fpage>172</fpage>
          -
          <lpage>181</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herts</surname>
          </string-name>
          , Ye. Hnatchuk,
          <article-title>Concept of Intelligent Decision Support System in the Legal Regulation of the Surrogate Motherhood</article-title>
          ,
          <source>CEUR-WS</source>
          <volume>2488</volume>
          (
          <year>2019</year>
          )
          <fpage>57</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hnatchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sachenko</surname>
          </string-name>
          ,
          <article-title>Supporting the Decision-Making About the Possibility of Donation and Transplantation Based on Civil Law Grounds</article-title>
          ,
          <source>in: Advances in Intelligent Systems and Computing</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>376</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -54215-3_
          <fpage>23</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cavazos Quero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Iranzo</given-names>
            <surname>Bartolomé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <article-title>Accessible Visual Artworks for Blind and Visually Impaired People: Comparing a Multimodal Approach with Tactile Graphics</article-title>
          ,
          <source>Electronics 10.3</source>
          (
          <year>2021</year>
          )
          <article-title>297</article-title>
          . doi:
          <volume>10</volume>
          .3390/electronics10030297.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Investigating Technologies to Enrich Museum Audio Description for Enhancing Accessibility</article-title>
          , New Voices in
          <source>Translation Studies</source>
          <volume>26</volume>
          (
          <issue>1</issue>
          ) (
          <year>2022</year>
          ). doi:
          <volume>10</volume>
          .14456/nvts.
          <year>2022</year>
          .
          <volume>17</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Djoussouf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Romeo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chottin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Thompson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Eardley</surname>
          </string-name>
          ,
          <article-title>Inclusion for Cultural Education in Museums, Audio and Touch Interaction</article-title>
          , in: Assistive Technology:
          <article-title>Shaping a Sustainable and Inclusive World</article-title>
          , IOS Press,
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .3233/shti230663.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] E. Niestorowicz, Tactile Perception of a Bas-relief. Audio Description as a Means to Make Art Available to the Blind. A Case Study, 2017. URL: https://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.desklight-6f89170a-273f-4576-a9e7-d3f4b8d774ff/c/263-276-Logopedia-46-2017-ANG-Niestorowicz-Ewa.pdf.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Reinhardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Holloway</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Thogersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Guerry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A. C.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Havellas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Poronnik</surname>
          </string-name>
          ,
          <article-title>The Museum of Touch: Tangible Models for Blind and Low Vision Audiences in Museums</article-title>
          , in: Multimodality in Architecture, Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>135</fpage>
          -
          <lpage>155</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>031</fpage>
          -49511-
          <issue>3</issue>
          _
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T. Z.</given-names>
            <surname>Nasser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kuflik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Danial-Saad</surname>
          </string-name>
          ,
          <article-title>Empowering Independence for Visually Impaired Museum Visitors Through Enhanced Accessibility</article-title>
          ,
          <source>Sensors</source>
          <volume>25</volume>
          .15 (
          <year>2025</year>
          )
          <article-title>4811</article-title>
          . doi:
          <volume>10</volume>
          .3390/s25154811.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] M. Mukhiddinov, S.-Y. Kim, A Systematic Literature Review on the Automatic Creation of Tactile Graphics for the Blind and Visually Impaired, Processes 9.10 (2021) 1726. doi:10.3390/pr9101726.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] M. Raynal, J. Ducasse, M. J. M. Macé, B. Oriola, C. Jouffrais, The FlexiBoard: Tangible and Tactile Graphics for People with Vision Impairments, Multimodal Technol. Interact. 8.3 (2024) 17. doi:10.3390/mti8030017.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Andrade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. V. F.</given-names>
            <surname>Pimenta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Eliseo</surname>
          </string-name>
          ,
          <article-title>Enhancing Art Accessibility for Visually Impaired Individuals through Multisensory Technologies</article-title>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Interact</surname>
          </string-name>
          .
          <source>Syst. 16.1</source>
          (
          <year>2025</year>
          )
          <fpage>805</fpage>
          -
          <lpage>816</lpage>
          . doi:
          <volume>10</volume>
          .5753/jis.
          <year>2025</year>
          .
          <volume>5162</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dolphin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Downing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cirrincione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Samuta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Leite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Noble</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Walsh</surname>
          </string-name>
          ,
          <article-title>Information Accessibility in the Form of Braille</article-title>
          , IEEE Open J.
          <source>Eng. Med. Biol</source>
          . (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . doi:
          <volume>10</volume>
          .1109/ojemb.
          <year>2024</year>
          .
          <volume>3364065</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Brock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jouffrais</surname>
          </string-name>
          ,
          <article-title>Interactive audio-tactile maps for visually impaired people</article-title>
          ,
          <source>ACM SIGACCESS Access. Comput. No. 113</source>
          (
          <year>2015</year>
          )
          <fpage>3</fpage>
          -
          <lpage>12</lpage>
          . doi:
          <volume>10</volume>
          .1145/2850440.2850441.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Papis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kalski</surname>
          </string-name>
          , G. Szuszkiewicz,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Kowalik</surname>
          </string-name>
          ,
          <article-title>Influence of 3D printing technology on reproducing cultural objects in the context of visually impaired people</article-title>
          ,
          <source>Adv. Sci. Technol. Res. J. 19.6</source>
          (
          <year>2025</year>
          )
          <fpage>121</fpage>
          -
          <lpage>130</lpage>
          . doi:
          <volume>10</volume>
          .12913/22998624/202248.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>P. F.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Warnett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Attridge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <source>Evaluation of Touchable 3D-Printed Replicas in Museums, Curator 60.4</source>
          (
          <year>2017</year>
          )
          <fpage>445</fpage>
          -
          <lpage>465</lpage>
          . doi:
          <volume>10</volume>
          .1111/cura.12244.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>