<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>ProfIT AI</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Artificial Intelligence-Driven Text-to-Tactile Graphics Generation for Visually Impaired People</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yehor Dzhurynskyi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Volodymyr Mayik</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lyudmyla Mayik</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>28a, Stepan Bandera Str., Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ukrainian Academy of Printing</institution>
          ,
          <addr-line>19, Pid Holoskom Str., Lviv, 79020</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>4</volume>
      <fpage>25</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>This research presents the development of a text-conditional tactile graphics generation model using the Bidirectional and Auto-Regressive Transformer (BART) and Vector Quantized Variational Auto-Encoder (VQ-VAE). The model leverages a modified organization of the latent space, divided into two independent components: textual and graphic. The study addresses the challenge of the limited availability of tactile graphics samples by expanding the training dataset with custom samples, enhancing the model's capability to convert textual information into graphical representations. The proposed method improves the creation of tactile graphics for visually impaired individuals, offering increased variability, controllability, and quality in synthesized tactile graphics. This advancement enhances both the technical and economic aspects of the production process for inclusive educational materials.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial intelligence</kwd>
        <kwd>tactile graphics</kwd>
        <kwd>visual impairment</kwd>
        <kwd>natural language processing</kwd>
        <kwd>model</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The dynamics of modern inclusive society development emphasize the need to integrate people with
visual impairments into active social life. The problem of socializing individuals with visual
impairments involves various aspects that complicate their education, training, and full participation
in society [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Specifically, people with visual impairments have limited access to information, as
many materials are produced only in standard printed or digital formats. This issue is further
exacerbated by the increasing prevalence of information in graphic form, designed for more effective
perception by readers. The aforementioned problems hinder the ability of individuals with visual
impairments to receive quality education and professional development [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5 ref7">2, 3, 4, 5, 7</xref>
        ].
      </p>
      <p>
        An analysis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] of the activities of publishing and printing industry enterprises that produce
educational and methodological literature (textbooks, manuals, etc.) for people with visual
impairments revealed problems related to the creation or adaptation of images and illustrative
materials, which are particularly crucial for this type of publication. When creating or adapting
graphic materials, enterprises encounter the following issues: an insufficient number of trained
specialists with specific competencies related to the technical implementation of tactile graphics;
additional time and financial costs for training specialists; and the high labor intensity and cost of
the process of creating or adapting tactile graphics. Consequently, the production issues surrounding
tactile graphics remain one of the primary factors contributing to the low level of access to graphical
information for people with visual impairments.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Scientists are working to solve the problem of producing tactile graphics by developing models for
the automatic generation of tactile images [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11, 12, 13</xref>
        ]. The task of most existing models is to
transform the content of a photo image into a tactile one.
      </p>
      <p>
        Models [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11</xref>
        ] that attempt to directly convert the content of an image into a tactile format
usually utilize computer vision and have the following disadvantages: they violate the requirements
for tactile graphics [14, 15]; they display redundant elements of the image that are difficult to read
and interfere with the overall interpretation of the graphic material.
      </p>
      <p>Models [12, 13] operate by detecting and recognizing individual image elements and replacing
them with tactile representations drawn from a limited set of templates. As a result, the synthesized
images lack variability: new samples cannot be synthesized, which reduces the appeal of the resulting
graphics for people with visual impairments. Despite this drawback, the method conveys the content
of the original photograph well and complies with the requirements for tactile
graphics.</p>
      <p>Additionally, such methods require supplementary source graphic information (e.g.,
photographs), the search for or creation of which slows down the process of preparing material for
the production of tactile images.</p>
      <p>The development of information technologies, particularly in the field of deep machine learning,
has opened new opportunities for addressing the aforementioned problems. Recently, significant
advancements have been demonstrated by information technologies based on artificial intelligence
[16, 17, 18], which enable the generation of images based on user text prompts. However, according
to the analysis [19, 20], confirmed by a series of experiments, the information technologies built upon
these mathematical models have proven ineffective for creating tactile graphics. Despite this, the
concept of text-guided image generation was chosen as the foundation for this work.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Text-conditional tactile graphics generation model</title>
      <p>The text-conditional tactile graphics generation model is built upon the Bidirectional and
Auto-Regressive Transformer (BART) [21] and the Vector Quantized Variational Auto-Encoder (VQ-VAE)
[22]. It models the process of converting textual information into graphic
information. To this end, the embedded space of the transformer, formed during language
modeling on the pretraining task, was divided into two independent embedded spaces, text and graphics,
instead of a shared one. The parameters of the graphic embedded space were
adjusted so that its dimension equals the size of the "codebook" [22]
and the dimensionality of its vectors equals the dimensionality
of the latent space vectors of the variational image synthesis model. The parameters of the text
embedded space remained the same as during language modeling.</p>
      <p>Before text tokens are obtained with the BPE [22, 23] tokenization model, the original text
components are normalized to a uniform format (uppercase letters are converted
to lowercase).</p>
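<p>The normalization and tokenization steps above can be sketched as follows. This is a simplified illustration: the merge table is a made-up toy example, not the vocabulary of the tokenizer the model actually uses.</p>

```python
def normalize(text: str) -> str:
    """Bring a prompt to a uniform format (lowercase, collapsed whitespace)."""
    return " ".join(text.lower().split())

def bpe_encode(word: str, merges: list[tuple[str, str]]) -> list[str]:
    """Greedy BPE: repeatedly apply the highest-priority merge present in the word."""
    symbols = list(word)
    while True:
        # Collect every (rank, position) where a known merge occurs between
        # adjacent symbols; stop when none applies.
        candidates = [(rank, i)
                      for rank, (a, b) in enumerate(merges)
                      for i in range(len(symbols) - 1)
                      if (symbols[i], symbols[i + 1]) == (a, b)]
        if not candidates:
            return symbols
        rank, i = min(candidates)           # lowest rank first, then leftmost
        a, b = merges[rank]
        symbols = symbols[:i] + [a + b] + symbols[i + 2:]

# Toy merge table for illustration only.
merges = [("t", "r"), ("tr", "e"), ("tre", "e")]
print(bpe_encode(normalize("Tree"), merges))  # ['tree']
```

Lowercasing before tokenization keeps "Tree" and "tree" from producing different token sequences, which would otherwise fragment the small training corpus.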
      <p>Formally, the process of converting text tokens into graphic tokens using the text-conditional tactile
graphics generation model is described in successive stages.</p>
      <p>The first step is to generate a bounded sequence of text tokens based on a text prompt:
T = {t_i ∈ V}, i = 1..n_max, (1)
where T is a sequence of text tokens of dimension n_max = 64 and V is a dictionary
of tokens. If the size of the generated sequence of text tokens exceeds this value, it is reduced to
the maximum value by discarding the excess tokens. If the size of the generated sequence of text tokens
is smaller than this value, it is increased to the maximum value by adding utility tokens ⟨pad⟩
that do not affect the simulation result.</p>
      <p>In the next step, the text tokens that form the sequence T are mapped to vectors of the text
embedded space, forming a subset of it:
E_t = {e_i^t ∈ H_t | e_i^t = emb(t_i), t_i ∈ T}, i = 1..n_max; E_t ⊆ H_t, (2)
where T is the sequence of text tokens, H_t is the text embedded space, and e_i^t are elements of the text
embedded space. The elements of E_t reflect the semantic meaning of the text tokens in the embedded space.</p>
      <p>Next, the vectors E_t of the text embedded space are transformed by the transformer's bidirectional
encoder, which is formed from several layers, producing the hidden state h_t. The bidirectionality of the
encoder means that it analyzes the full context of an individual vector of the embedded space,
considering both the previous and the following elements of the sequence:
h_t = Encode(E_t); h_t ⊆ H_t, (3)
where h_t is the hidden state of the encoder and Encode(∙) is the transformer's encoding operation
defined within [21].</p>
      <p>The hidden state of the encoder h_t is then converted by linear layers and a nonlinear activation
function into the hidden state of the decoder (i.e., graphic information), forming a subset of the graphic
embedded space H_g:
h_g = L_2 ∘ ReLU ∘ L_1(h_t),
where h_t ⊆ H_t is the hidden state of the encoder, h_g ⊆ H_g is the hidden state of the decoder,
L_i is a linear layer, and ReLU(x) ≝ max(0, x) is the non-linear activation function.</p>
      <p>At the next stage, an autoregressive [25, 26] transformer decoder is used. This means that the
decoder generates one graphic token per iteration, considering the context of the previously
generated graphic tokens. Thus, during the decoding process, the model performs calculations based
on the hidden state h_t and the previously generated elements of the vector sequence of the graphic
embedded space:
e_i^g = Decode(h_t, e_1^g, e_2^g, …, e_(i−1)^g); i ≤ m, (4)
where e_i^g is the i-th element of the vector sequence of the graphic embedded space H_g; e_j^g, j &lt; i,
are the previously generated vectors of the graphic embedded space; h_g is the hidden state of the
decoder; m is the size of the final sequence E_g ⊆ H_g; and Decode(∙) is the transformer's decoding
operation defined within [21].</p>
      <p>Decoding occurs in an iterative manner until the size of the sequence E_g is equal to m (i.e., the size of
the latent space vector sequence of the VQ-VAE model).</p>
      <p>Once decoding is complete, the resulting sequence of vectors of the graphic embedded space
E_g is converted by a linear layer and the softmax function into a sequence of probability distributions,
from which the element with the highest probability is selected, determining the generated graphic
token:
G = {g_i}, i = 1..m; g_i = argmax(softmax ∘ L(e_i^g)), (5)
where G is the generated sequence of graphic tokens of size m and e_i^g ∈ E_g is an element of the
vector sequence of the graphic embedded space H_g.</p>
      <p>In the next step, on the basis of the graphic tokens (5), a sequence of latent quantized vectors
Z_q is formed, defined by formula (6). Each graphic token g_i, 1 ≤ g_i ≤ |C|, i = 1..m, is the
positional number of a quantized vector in the "codebook" of the VQ-VAE model:
Z_q = {z ∈ C | z = c_(g_i), g_i ∈ G}, i = 1..m; Z_q ⊆ C, (6)
where C is the set of latent quantized vectors, or "codebook"; Z_q ⊆ C is the sequence of latent
quantized vectors; g_i ∈ G is a graphic token; and m is the size of the sequence of latent quantized
vectors.</p>
      <p>The final step is the synthesis of the tactile graphic from the sequence of latent quantized vectors (6)
using the decoder of the variational image synthesis model:
Y = Decode_VQ(Z_q), (7)
where Z_q is the sequence of latent quantized vectors; Decode_VQ(∙) is the image decoding
operation based on the latent representation defined within [22]; and Y is the generated tactile image.
The diagram of the text-conditional tactile graphics generation model is shown in Figure 1.</p>
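<p>As a rough end-to-end illustration of the stages above, the following NumPy sketch walks a padded token sequence through embedding, an encoder stand-in, an autoregressive token-selection loop, codebook lookup, and a decoder stand-in. All weights, dimensions, and the toy recurrence over previously generated tokens are random stand-ins, not the paper's BART/VQ-VAE implementation.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_max, d_t, d_g = 64, 32, 16   # text length, text / graphic embedding dims (toy)
K, m = 128, 48                 # codebook size, graphic-token sequence length (toy)

def pad_or_truncate(tokens, pad_id=0):
    """Stage (1): bound the token sequence to exactly n_max entries."""
    return (tokens + [pad_id] * n_max)[:n_max]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

E_t = rng.normal(size=(1000, d_t))          # text embedding table, stand-in for H_t
W1 = rng.normal(size=(d_t, d_t))            # encoder weight stand-in
W2 = rng.normal(size=(d_t, d_g))            # linear map into the graphic space H_g
L_out = rng.normal(size=(d_g, K))           # linear head before softmax, stage (5)
codebook = rng.normal(size=(K, d_g))        # VQ-VAE "codebook" C, stage (6)

tokens = pad_or_truncate([17, 42, 5])       # (1) bounded token sequence
emb = E_t[tokens]                           # (2) map tokens into the text space
h_t = np.tanh(emb @ W1)                     # (3) encoder stand-in
h_g = np.maximum(0, h_t) @ W2               # ReLU + linear layer into H_g

graphic_tokens = []                         # (4)-(5) autoregressive loop:
state = h_g.mean(axis=0)                    # one graphic token per iteration
for _ in range(m):
    g_i = int(np.argmax(softmax(state @ L_out)))   # most probable token
    graphic_tokens.append(g_i)
    state = 0.9 * state + 0.1 * codebook[g_i]      # toy recurrence over history

Z_q = codebook[graphic_tokens]              # (6) latent quantized vectors
Y = Z_q @ rng.normal(size=(d_g, 8 * 8))     # (7) decoder stand-in -> "image"
print(Y.shape)                              # (48, 64)
```

The sketch only demonstrates how the shapes and stages chain together; in the real model, Encode, Decode, and the VQ-VAE decoder are learned networks.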
    </sec>
    <sec id="sec-4">
      <title>4. Experiment</title>
      <p>In this experiment, the proposed model was trained using the parameters presented in Tables 1 and
2 for the BART and VQ-VAE models, respectively. It is important to note that the size of the decoder’s
dictionary and the length of the sequence are each increased by one unit compared to the original
values. This adjustment is necessary to introduce an additional image service token (i.e., SOS token),
which is added at the beginning of the sequence to facilitate autoregressive image generation.</p>
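        <p>The adjustment can be sketched as follows; the SOS token id of 0 and the codebook size are illustrative assumptions, not values taken from the paper's configuration.</p>

```python
CODEBOOK_SIZE = 1024          # toy value, not the real model's dictionary size
SOS_ID = 0                    # assumed id for the service start-of-sequence token

def to_decoder_sequence(graphic_tokens):
    """Prepend SOS and shift codebook ids by one for autoregressive decoding,
    so the dictionary and sequence length each grow by one unit."""
    return [SOS_ID] + [t + 1 for t in graphic_tokens]

def from_decoder_sequence(seq):
    """Drop SOS and undo the shift to recover original codebook indices."""
    return [t - 1 for t in seq[1:]]

seq = to_decoder_sequence([5, 0, 1023])
print(seq)                          # [0, 6, 1, 1024]
print(from_decoder_sequence(seq))   # [5, 0, 1023]
```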
      <p>Language modeling was performed on the BrUK corpus [27], which consists of Ukrainian texts
from various sources. Unlike textual datasets (i.e., corpora), which are widely accessible, tactile
graphics samples are much less common. A significant obstacle in modeling tactile graphics
generation using machine learning is the insufficient number of publicly available samples, as the
tactile graphics production industry is less prevalent compared to the traditional one.</p>
      <p>Nevertheless, a collection of plant and animal images stored in the APH Tactile Graphics Library
[28] was chosen as the original set of images for the model to learn to reproduce. Additionally, the
training dataset was expanded with 41 custom tactile image samples, increasing the total number of
samples to 179. The custom samples were derived from simple images of animals and had been used at a
Ukrainian institution that provides preschool education for children with visual
impairments.</p>
      <sec id="sec-4-1">
        <p>The results of the experiment include samples of generated tactile graphics images based on
various types of text prompts, such as monosyllabic prompts, prompts with numerals, and prompts
with epithets. These samples are presented in Figure 2.</p>
        <p>The text prompts shown in Figure 2 include: “a daisy”, “a cow”, “Top view of butterfly”, “a tree”,
“three daisies”, “a spotted cow”, “a butterfly (side view)”, “a naked tree”, “a leaf”, “a dog”,
“a turkey”, and “a deer (side view)”.</p>
        <p>The model's performance was evaluated separately for each component: BART and VQ-VAE. The
results of this evaluation are presented in Table 3. The Cross-Entropy metric reflects how well the
model converts text prompts into appropriate graphic tokens, and Perplexity represents the
uncertainty in the model's predictions. Lower values indicate better performance, meaning the model
is more confident in its generation process. For tactile graphics, FID measures how similar the
generated tactile images are to real ones in the latent space of the model. A lower FID score indicates
that the generated tactile graphics are closer to real tactile images in terms of visual and tactile
features.</p>
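        <p>The link between the two BART metrics in Table 3 can be made explicit: perplexity is the exponential of the (natural-log) cross-entropy, so the two move together. A minimal sketch:</p>

```python
import math

def cross_entropy(probs_of_targets):
    """Average negative log-likelihood the model assigns to the correct tokens."""
    return -sum(math.log(p) for p in probs_of_targets) / len(probs_of_targets)

# Illustrative model probabilities for three target graphic tokens.
ce = cross_entropy([0.5, 0.25, 0.125])
ppl = math.exp(ce)
print(round(ce, 4), round(ppl, 4))  # 1.3863 4.0
```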
        <p>Additionally, the overall performance of the model was evaluated using the CLIP Score metric
[29], which reflects the model's capability in converting textual information into graphical
information. The average CLIP Score of the developed model is 23.7.</p>
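        <p>A hedged sketch of the idea behind CLIP Score: it is derived from the cosine similarity between the CLIP embeddings of the prompt and the generated image, clamped at zero; common implementations rescale it (the factor of 100 below is an assumption, and the vectors are toy stand-ins rather than real CLIP outputs).</p>

```python
import numpy as np

def clip_score(text_emb, image_emb, scale=100.0):
    """Scaled, clamped cosine similarity between two embedding vectors."""
    t = text_emb / np.linalg.norm(text_emb)
    v = image_emb / np.linalg.norm(image_emb)
    return scale * max(float(t @ v), 0.0)

# Toy embeddings with cosine similarity 0.5.
t = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])
print(round(clip_score(t, v), 1))  # 50.0
```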
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations</title>
      <sec id="sec-5-1">
        <p>The current dataset used for training includes relatively simple images (e.g., animals, plants, basic
objects). One limitation of the model is its potential difficulty in scaling to more complex images,
such as those with intricate details (e.g., architectural blueprints, detailed scientific diagrams). The
model’s ability to capture fine details may be limited by the size of the latent space and the number
of hidden layers used in the VQ-VAE model. Complex tactile graphics might require a more
fine-grained representation, which could lead to inefficiencies or inaccuracies in generation if the model
architecture remains unchanged.</p>
        <p>Moreover, while the model performs well on simpler prompts (e.g., "a cow," "a tree"), more complex
and nuanced prompts (e.g., "a group of children playing soccer with a spotted ball") might pose
challenges. This is because the Transformer’s encoding of textual information becomes more
demanding as the semantic richness and length of the prompt increase. The model may struggle to
disentangle and appropriately represent all components of a complex scene in tactile graphics form,
leading to loss of information or oversimplification.</p>
        <p>Regarding computational requirements, the training process of the proposed model, which
integrates both the BART Transformer and the VQ-VAE, requires significant computational
resources. Due to the autoregressive nature of the model and the need to process both textual and
graphical latent spaces, training is computationally expensive. It requires powerful GPUs or TPUs,
large memory capacity, and extended training time, particularly as the dataset grows. This makes
scaling to larger datasets or higher-dimensional image outputs challenging without access to
advanced computing infrastructure.</p>
        <p>One of the key ethical concerns in the development of tactile graphics is ensuring that the
generated images do not misrepresent the information. For visually impaired users, the tactile
graphic is a primary means of understanding visual content, and any distortion or inaccuracy could
lead to misunderstandings. For example, if a generated tactile graphic oversimplifies or omits
important details, users might receive an incomplete or misleading representation of the intended
information. To mitigate this risk, it is important to validate the model outputs rigorously against
established standards for tactile graphics and seek feedback from visually impaired users to ensure
that the tactile representations are both accurate and understandable.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>As a result of this research, a text-conditional tactile graphics generation model was developed using
BART and VQ-VAE. The model employs a modified organization of the latent space, divided into
two independent components: textual and graphic.</p>
      <p>The method of creating tactile graphics for publications aimed at individuals with visual
impairments has been improved. This enhancement increases the variability, controllability, and
quality of synthesized tactile graphics, thereby improving the technical and economic aspects of the
production process.</p>
      <p>This technology can bridge the gap in access to educational materials, allowing visually impaired
individuals to better engage with subjects that rely heavily on visual content, such as science,
mathematics, and geography. The availability of automated tactile graphics can facilitate greater
independence in learning and enhance participation in inclusive classrooms and professional
environments.</p>
      <p>An important direction for further research is to increase the size and diversity of the training
sample, improving the model's general ability to generalize and ensuring its stable operation in
various scenarios.</p>
      <p>[12] K. Pakėnaitė, P. Nedelev, E. Kamperou, M. Proulx and P. Hall, "Communicating Photograph Content Through Tactile Images to People With Visual Impairments," Frontiers in Computer Science, vol. 3, 2022.
[13] K. Pakenaite, E. Kamperou, M. J. Proulx, A. Sharma and P. Hall, "Pic2Tac: Creating Accessible Tactile Images using Semantic Information from Photographs," in Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, Cork, 2024.
[14] Polish Association of the Blind, "Instructions for creating and adapting illustrations and typhlographic materials for blind students," 2016.
[15] Braille Authority of North America &amp; Canadian Braille Authority, "Guidelines and Standards for Tactile Graphics," 2022. [Online]. Available: https://www.brailleauthority.org/guidelines-and-standards-tactile-graphics. [Accessed 20 April 2024].
[16] J. Oppenlaender, "The Creativity of Text-to-Image Generation," in Academic Mindtrek '22: Proceedings of the 25th International Academic Mindtrek Conference, New York, 2022.
[17] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu and M. Chen, "Hierarchical Text-Conditional Image Generation with CLIP Latents," arXiv, vol. abs/2204.06125, 2022.
[18] R. Rombach, A. Blattmann, D. Lorenz, P. Esser and B. Ommer, "High-Resolution Image Synthesis with Latent Diffusion Models," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, 2022.
[19] Y. Dzhurynskyi and V. Mayik, "Preparation of illustrations for inclusive literature using artificial intelligence models of image synthesis from text," Proceedings, vol. 66, no. 1, pp. 155-163, 2023.
[20] Y. Dzhurynskyi, "Generation of illustrations for inclusive literature using Midjourney artificial intelligence model," in "Scientific method: reality and future trends of researching": collection of scientific papers "SCIENTIA" with Proceedings of the II International Scientific and Theoretical Conference, Zagreb, 2023.
[21] Y. Yu, F. Zhan, R. Wu, J. Pan, K. Cui, S. Lu, F. Ma, X. Xie and C. Miao, "Diverse Image Inpainting with Bidirectional and Autoregressive Transformers," arXiv, vol. abs/2104.12335, 2021.
[22] A. van den Oord, O. Vinyals and K. Kavukcuoglu, "Neural Discrete Representation Learning," CoRR, vol. abs/1711.00937, 2017.
[23] Kh. Kulchytska, M. Semeniv, B. Kovalskyi, N. Pysanchyn and Z. Selmenska, "Influence of Hadamard matrices canonicity on image processing," in: Z. Hu, S. Petoukhov, F. Yanovsky, M. He (eds.), ISEM '21, LNCS, vol. 463, pp. 329-338, Springer, Cham, 2022. doi:10.1007/978-3-031-03877-8_29.
[24] V. Zouhar, C. Meister, J. Luis Gastaldi, L. Du, T. Vieira, M. Sachan and R. Cotterell, "A Formal Perspective on Byte-Pair Encoding," in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, 2023.
[25] M. Dalal, A. C. Li and R. Taori, "Autoregressive Models: What Are They Good For?," CoRR, vol. abs/1910.07737, 2019.
[26] A. Graves, "Generating Sequences With Recurrent Neural Networks," CoRR, vol. abs/1308.0850, 2013.
[27] A. Rysin, "LanguageTool API NLP UK," 2022. [Online]. Available: https://github.com/brown-uk/nlp_uk. [Accessed 21 April 2024].
[28] American Printing House, "Tactile Graphic Image Library," [Online]. Available: https://imagelibrary.aph.org/portals/aphb/#page/welcome. [Accessed 21 April 2024].
[29] J. Hessel, A. Holtzman, M. Forbes, R. Le Bras and Y. Choi, "CLIPScore: A Reference-free Evaluation Metric for Image Captioning," CoRR, vol. abs/2104.08718, 2021.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <article-title>GBD 2019 Blindness and Vision Impairment Collaborators; Vision Loss Expert Group of the Global Burden of Disease Study. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study, Lancet Glob Health (</article-title>
          <year>2021</year>
          )
          <fpage>e130</fpage>
          -
          <lpage>e143</lpage>
          . doi:10.1016/S2214-109X(20)30425-3.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ackland</surname>
          </string-name>
          , Serge Resnikoff,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bourne</surname>
          </string-name>
          ,
          <article-title>World blindness and visual impairment: Despite many successes, the problem is growing</article-title>
          ,
          <source>Community Eye Health Journal</source>
          (
          <year>2018</year>
          )
          <fpage>71</fpage>
          -
          <lpage>73</lpage>
          . PMID: 29483748.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Zebehazy</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Wilton</surname>
          </string-name>
          ,
          <article-title>"Graphic Reading Performance of Students with Visual Impairments and Its Implication for Instruction and Assessment,"</article-title>
          <source>Journal of Visual Impairment &amp; Blindness</source>
          , vol.
          <volume>115</volume>
          , pp.
          <fpage>215</fpage>
          -
          <lpage>227</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mukhiddinov</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Soon-Young</surname>
          </string-name>
          ,
          <article-title>"A Systematic Literature Review on the Automatic Creation of Tactile Graphics for the Blind and Visually Impaired,"</article-title>
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bara</surname>
          </string-name>
          ,
          <article-title>"The Effect of Tactile Illustrations on Comprehension of Storybooks by Three Children with Visual Impairments: An Exploratory Study,"</article-title>
          <source>Journal of Visual Impairment &amp; Blindness</source>
          , vol.
          <volume>112</volume>
          , pp.
          <fpage>759</fpage>
          -
          <lpage>765</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mayik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dudok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mayik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lotoshynska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kusmierczyk</surname>
          </string-name>
          ,
          <article-title>An Approach Towards Vacuum Forming Process Using PostScript for Making Braille</article-title>
          , in: Advances in Computer Science for Engineering and Manufacturing, Springer International Publishing,
          <year>2022</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>48</lpage>
          . doi:10.1007/978-3-031-03877-8_4.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dzhurynskyi</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Mayik</surname>
          </string-name>
          ,
          <article-title>"Analysis of the process of preparing illustrations for inclusive literature," Qualilogy of the book</article-title>
          , vol.
          <volume>41</volume>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Way</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Barner</surname>
          </string-name>
          ,
          <article-title>"Towards Automatic Generation of Tactile Graphics,"</article-title>
          <source>Rehabilitation Engineering and Assistive Technology Society of North America</source>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>163</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Way</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Barner</surname>
          </string-name>
          ,
          <article-title>"Automatic visual to tactile translation - Part I: Human factors, access methods, and image manipulation,"</article-title>
          <source>IEEE Transactions on Rehabilitation Engineering</source>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>94</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Way</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Barner</surname>
          </string-name>
          ,
          <article-title>"Automatic visual to tactile translation. II. Evaluation of the TACTile image creation system,"</article-title>
          <source>IEEE Transactions on Rehabilitation Engineering</source>
          , pp.
          <fpage>95</fpage>
          -
          <lpage>105</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Ferro</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Pawluk</surname>
          </string-name>
          ,
          <article-title>"Automatic image conversion to tactile graphic,"</article-title>
          <source>in Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility</source>
          , Bellevue Washington,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>