<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Statistical translation method for Ukrainian Sign Language</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Olga Lozynska</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv, Stepan Bandera 12, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The paper examines the main problems that arise when translating Ukrainian Sign Language and the differences between Ukrainian Sign Language and Ukrainian Written Language. The well-known methods of machine translation of sign languages are described, in particular, rule-based machine translation, statistical machine translation, ontology-based machine translation, and the relatively new neural machine translation. A study was conducted on the application of statistical machine translation, namely the application of the IBM #1 model and the EM algorithm for aligning words in the sentences of the corpus of parallel texts "Ukrainian Written Language - Ukrainian Sign Language". Examples of the application of the statistical method of translation are given and the main results are described. Alignment matrices for sentence structures of the same type are generalized.</p>
      </abstract>
      <kwd-group>
        <kwd>statistical machine translation</kwd>
        <kwd>Ukrainian Sign Language</kwd>
        <kwd>Ukrainian Written Language</kwd>
        <kwd>corpus of parallel sentences</kwd>
        <kwd>EM algorithm</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>More than 44,000 people with hearing impairment are registered with the All-Ukrainian
public organization of the disabled "Ukrainian Society of the Deaf". The World Federation
of the Deaf (WFD) unites more than 70 million deaf people around the world and 135
national associations of the deaf.</p>
      <p>
        People who communicate in sign language should be provided with comfortable access
to modern information resources. To achieve this goal, it is necessary to solve the difficult
task of translating sign language into a corresponding text record. Sign language (SL) is a
natural language with a grammatical structure and vocabulary that differs from written
language [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>As the number of applied research projects grows, the latest technologies are being
used to improve the situation of people with physical disabilities. Therefore, the
development of methods and means of translating sign language into text is a promising
research direction. For free communication with the deaf, it is sufficient to develop a system
that translates sign language into text and vice versa.</p>
      <p>Ukrainian Sign Language is a natural way of communication for deaf Ukrainians and is
an integral part of their personality. The sequence of words in a sign language sentence is
different from written language. Despite the lack of prepositions and cases, Ukrainian Sign
Language is a multi-level linguistic system that has a wide range of lexical and grammatical
tools for expressing opinions and analyzing information. Just like spoken language, sign
language changes over time: new gestures appear, foreign ones are borrowed.</p>
      <p>Tracing Sign Language literally reproduces the sentences of the written language.
Gestures are used to show the root of a word, and dactyl is used to show prepositions,
prefixes, suffixes, and endings. Usually, tracing sign language is used for dictation. However,
Ukrainian Sign Language is the main language of communication for people with hearing
impairments, as it is much easier to understand than tracing sign language.</p>
      <p>
        The task of translating Ukrainian Sign Language (USL) into Ukrainian Written Language
(UWL) belongs to the tasks of computer translation. The following well-known computer
translation methods are used for sign language translation: rule-based translation,
statistical-based translation, ontology-based machine translation, phrase-based machine
translation, neural machine translation method. Translation programs based on rules
analyze the text and build its translation based on built-in dictionaries and a set of rules for
a given language pair [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Statistical translation uses the principle of statistical analysis:
large volumes of texts (millions of words) in the original language and their human
translations are loaded into the program [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The program, analyzing the statistics of
cross-language correspondences and syntactic constructions, selects the best translation option.
Ontology-based machine translation of sign language implements an ontology of the sign
language domain to solve some sign language challenges [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Statistical machine translation
systems based on phrase-based models translate small word sequences at a time [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The neural
machine translation method is based on neural networks. It makes it possible to combine
alignment and translation to and from multiple languages, even creating multilingual
models. Nevertheless, this method requires large data sets, which makes small datasets
unusable [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>USL has its own language organization and is considered according to the main
provisions of structural linguistics. National Sign Languages are independent of the
corresponding sound languages and have their own history and structure. USL consists of
three main functional-structural components: kinetics (the inventory of kinemes),
vocabulary (a set of gestures), grammar (a set of rules and means for their implementation).</p>
      <p>The problem of computer translation of Ukrainian Sign Language into Ukrainian Written
Language is caused by the fact that Ukrainian Sign Language is a language without a written
form. Therefore, in order to translate USL into the Ukrainian Written Language, it is
necessary to create a certain written record for USL. Since Ukrainian Sign Language is a
means of communication in which only visual-kinetic means are used to convey information
(hand gestures, lip articulation, facial expressions and emotions), all these features must be
taken into account for the translation of USL into written language.</p>
      <p>The translation of Ukrainian Sign Language is a complex task, which includes the analysis
of the grammar of USL, the construction of rules for the translation of Ukrainian Sign
Language into text and vice versa.</p>
      <p>The main problems of computer translation from Ukrainian Sign Language into written
Ukrainian are:</p>
      <list list-type="bullet">
        <list-item><p>sign language grammar differs from written language grammar;</p></list-item>
        <list-item><p>in sign language, the order of words in a sentence is of great importance;</p></list-item>
        <list-item><p>the number of words in sign language does not correspond to the number of words in written language;</p></list-item>
        <list-item><p>the dactyl alphabet, pointing gestures, and transliteration of proper names and terms are used along with gestures.</p></list-item>
      </list>
      <p>To develop a system of statistical machine translation from Ukrainian Sign Language into
Ukrainian Written Language and vice versa, the following steps should be taken into
account:</p>
      <list list-type="order">
        <list-item><p>Sign recognition using a recognition device;</p></list-item>
        <list-item><p>Recording of recognized signs in gloss notation;</p></list-item>
        <list-item><p>Statistical machine translation of glosses into text and vice versa;</p></list-item>
        <list-item><p>Transforming glosses into a conversational model;</p></list-item>
        <list-item><p>Reproduction of signs by an avatar.</p></list-item>
      </list>
      <p>In this article, we consider only a part of the translation system, namely statistical
machine translation.</p>
      <p>The main steps of such a system are schematically shown in Figure 1.</p>
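The five steps above can be sketched as a pipeline of stubs. Everything in this sketch is an illustrative assumption: the function names, the stubbed recognizer output, and the toy lexicon are invented for the example and do not belong to any existing system.

```python
# Hypothetical sketch of stages 1-3 of the pipeline; the statistical
# translation step is stood in for by a word-for-word lexicon lookup.

def recognize_signs(video_frames):
    # Stage 1: sign recognition with a recognition device (stubbed here).
    return ["Я", "ПИСАТИ", "СТАТТЯ"]

def to_gloss_notation(signs):
    # Stage 2: record recognized signs in gloss notation (capital letters).
    return [sign.upper() for sign in signs]

def translate_glosses(glosses):
    # Stage 3: translation of glosses into UWL text (word-for-word stub).
    lexicon = {"Я": "я", "ПИСАТИ": "пишу", "СТАТТЯ": "статтю"}
    return " ".join(lexicon.get(g, g) for g in glosses)

text = translate_glosses(to_gloss_notation(recognize_signs([])))
print(text)  # prints: я пишу статтю
```

Stages 4 and 5 (conversational model, avatar rendering) would follow the same hand-off pattern but are outside the scope of this article.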
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Many scientists around the world are engaged in researching the problems of
translation of foreign sign languages, for example, S. Morrissey and A. Way [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], D. Stein, P.
Dreuw, G. Ney (for English Sign Language) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], R. San-Segundo, A. Pérez, D. Ortiz, L.F. D'Haro,
M. I. Torres, F. Casacuberta) [10] (for Spanish sign language), J. Bungeroth [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] (for German
sign language). We have studied statistical translation methods [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], rule-based
translation methods [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], phrase-based translation methods [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Advances in statistical machine translation allow it to be used for automatic sign
language translation. In the work of A. Othman and M. Jemni [11], [12], statistical
machine translation for English Sign Language is considered. Examples of the use of IBM
models #1-3 for translation from English Written Language to English Sign Language, as
well as the EM algorithm for word alignment are given.</p>
      <p>
        The work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] examines the application of statistical machine translation for the creation
of a system for translating German Sign Language into written German, describes the
architecture of this translation system, and provides translation results. Such a system is
formed with the help of a bilingual corpus. This corpus contains 200 sentences, of which
167 sentences are training data and 33 sentences are test data. Training is performed using
various statistical models, such as IBM-Models #1-4 and Hidden Markov models.
      </p>
      <p>Despite the fact that linguistic studies of Ukrainian Sign Language have been carried out
by scientists for a long time, there is still no complete description of the grammar of USL
and the basic principles of USL translation.</p>
      <p>Ukrainian scientists Yu. V. Krak, O.V. Barmak et al. from V.M. Glushkov Institute of
Cybernetics of the National Academy of Sciences [13] proposed information technology for
modeling the Ukrainian Sign Language. During the implementation of the technology,
scientists faced problems related to the coexistence of two sign languages of communication,
tracing and Ukrainian Sign Language; the absence of an unambiguous
correspondence between words and signs; and the lack of programs for the grammatical
analysis of Ukrainian sentences. In paper [14], new tools of alternative
communication for persons with verbal communication disorders are described.</p>
      <p>
        In scientific work [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] a mathematical method for translation into Ukrainian Sign
Language based on ontologies is described. The authors used a weighted affix context-free
grammar (WACFG) parser for sentence parsing, which made it possible to increase the
percentage of correctly translated sentences. An algorithm for transforming a constituency
tree into a dependency tree was developed. It showed high efficiency (89% of sentences
converted correctly) and is suitable for use in machine translation systems.
      </p>
      <p>
        Rule-based machine translation into Ukrainian Sign Language using a concept dictionary
is described in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The authors identified five main cases of relationships between words,
signs and concepts used for translating Ukrainian Sign Language. They proposed an
algorithm for translation from Ukrainian Spoken Language to Ukrainian Sign Language
based on concepts. This algorithm was tested using a database of 360 sentences, which
contained 60 concepts. As a result, 87% of sentences were translated correctly, 32% of
which contained concepts, 13% were not translated due to the lack of word to sign
correspondence.
      </p>
      <p>
        In paper [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] authors presented a neural machine translation method from Japanese
Spoken Language to Japanese Sign Language glosses. They used a pre-trained model as the
initial model of the encoder, and confirmed that the method works well, especially in
small-training-data situations. The training data comprised about 130,000 sentence pairs,
and the BLEU score for this method was 24.24. Other methods, including phrase-based statistical
machine translation had a BLEU score of 23.96.
      </p>
      <p>The scientific work [15] describes the first evaluation of the quality of automatic
translation between Myanmar Sign Language (MSL) and Myanmar written text, in both
directions. The authors proposed three different statistical machine translation (SMT)
approaches: phrase-based, hierarchical phrase-based, and the operation sequence model.
They used an MSL–Myanmar parallel corpus for these translation methods. The scientists
used three different segmentation schemes: syllable segmentation, word segmentation and
sign unit-based word segmentation. The results show that the highest quality machine
translation was attained with syllable segmentations for both MSL and Myanmar written
text.</p>
      <p>In article [11] the authors described statistical machine translation of written English
text to sign language. The scientists proposed a novel approach to build artificial corpus
using grammatical dependencies rules owing to the lack of resources for sign language. The
parallel corpus was the input of the statistical machine translation, which was used for
creating statistical memory translation based on IBM alignment algorithms. These
algorithms were enhanced and optimized by integrating the Jaro–Winkler distance in
order to shorten the training process. Subsequently, based on the constructed translation
memory, a decoder was implemented for translating English text to the ASL using a novel
proposed transcription system based on gloss annotation. The results were evaluated using
the BLEU evaluation metric.</p>
      <p>An overview of known methods of Sign Language Translation (SLT) is described in the
work [16]. The authors also review the possible tasks related to SLs, the
metrics used for the generated glosses and spoken language text and a summary of all the
available public datasets and whether they are suitable for the SLT task or not. Moreover,
the survey lists the challenges that need to be tackled within the SLT research and also for
the adoption of SLT technologies, and proposes future research lines.</p>
      <p>In paper [17] the authors discuss machine translation from sign to spoken languages.
The neural machine translation approach to Sign Language Translation is investigated. They
describe the main problem of neural machine translation, in particular, small data sets for translation.
Many datasets consider limited domains of discourse and generally contain recordings of
non-native signers. This has implications on the quality and accuracy of translations
generated by models trained on these datasets, which must be taken into account when
evaluating SLT models.</p>
      <p>The authors [18] described several techniques, commonly used in low resource machine
translation scenarios, for machine translation from spoken language text to sign language
glosses. Data augmentation, semi-supervised Neural Machine Translation, transfer learning
and multilingual NMT were used for the experiments. The results of experiments carried
out on two natural datasets including gloss annotation (the RWTH-PHOENIX-Weather 2014T
dataset and the Public DGS Corpus) indicate an increase in the BLEU metric of up to 6.18.</p>
      <p>
        In paper [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] the translation of Arabic sign language using ontology and deep learning
techniques is proposed. The authors implemented an ontology of the sign language domain
to solve some SL challenges. An Arabic sign language dataset was developed. Experimental
results show that the classification accuracy of the training set increased from 98.06% to
98.6% and semantic recognition accuracy of the testing set increased from 88.87% to
94.31%.
      </p>
      <p>The paper [19] describes translation between German Sign Language glosses and
German written language. The authors focus on the second-stage gloss translation
component, which is challenging due to the scarcity of publicly available parallel data. Their
approach is based on gloss translation as a low-resource machine translation task and
contains hyperparameter search and back-translation. For experiments, the authors use the
RWTH-PHOENIX-Weather 2014T dataset. The resulting gloss-text system improves over
the baseline system by a margin of 2.44 BLEU. The biggest problem, the authors note, is
limited parallel data.</p>
      <p>The scientific work [20] formalizes German Sign Language Translation in the framework
of Neural Machine Translation for both end-to-end and pretrained settings (using expert
knowledge). To achieve NMT from sign videos, the authors employed CNN-based spatial
embeddings and various tokenization methods to jointly learn to align, recognize, and
translate sign videos to spoken text. The sign language translation dataset (PHOENIX14T)
was assembled for conducting experiments. Using the end-to-end frame-level method and
the gloss-level tokenization network, BLEU-4 scores of 9.58 and 18.13 were achieved, respectively.</p>
      <p>In paper [21] the authors describe research on various deep learning-based methods for
encoding sign language as input and analyze several machine translation methods
using three different sign language datasets. The authors apply translation methods to
several sign languages, such as German Sign Language (GSL), American SL (ASL), and
Chinese SL (CSL). The transformer model developed by the authors outperformed all other
sequence-to-sequence models on the GSL and CSL datasets using OpenPose features.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed methodology</title>
      <p>Statistical sign language machine translation systems use bilingual corpora that contain
complete sentences. Such corpora are used to train these statistical systems. But when it
comes to sign language, two main problems arise. The first problem is the lack of large
corpora. Existing corpora use gloss notation (one gloss = one sign), which is too complex to
learn. In addition, the inconsistent use of system notation complicates the task [13]. For
Ukrainian Sign Language, we have created a corpus of more than 230 sentences, which
contains sentences in the Ukrainian Written Language and their translation into Ukrainian
Sign Language, taking into account the basic rules of translation. The second problem is the
lack of a standard for notation of signs.</p>
      <p>
        We will consider the model of statistical machine translation of sign language, which is
based on lexical translation of words, that is, word-to-word translation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This model uses
a bilingual corpus that contains paired sentences of Ukrainian Sign Language and Ukrainian
Written Language. To denote signs, we will use glosses: a word written in capital
letters. For example, the UWL word “жінка” (woman) [zhinka] is written as follows in USL:
“ЖІНКА” (WOMAN) [ZHINKA].
      </p>
      <p>Due to the difference in word order between Ukrainian Written Language and Ukrainian
Sign Language sentences, the words need to be redistributed (aligned) during the
translation process. Here is an example of a sentence pair:</p>
      <p>UWL: Я пишу наукову статтю (I am writing a scientific article) [Ya pyshu naukovu stattiu]
USL: Я ПИСАТИ СТАТТЯ НАУКОВА (I WRITE ARTICLE SCIENTIFIC) [YA PYSATY STATTIA NAUKOVA]</p>
      <p>All models of statistical machine translation are based on the principle of word
alignment. To align the positions of the words in a sentence, the alignment function is used,
which maps each USL word in position j to the corresponding UWL word in position i:
a : j → i. For the example above, the alignment function is a : {1 → 1, 2 → 2, 3 → 4, 4 → 3}.</p>
      <p>When aligning words during translation, there are also cases when one word in UWL
corresponds to several words in USL and vice versa. For example, for the sentence pair
UWL: Я писала статтю (I wrote an article) [Ya pysala stattiu] and
USL: Я ПИСАТИ БУВ СТАТТЯ (I WRITE WAS ARTICLE) [YA PYSATY BUV STATTIA],
the UWL word “писала” corresponds to the two USL glosses “ПИСАТИ БУВ”, and the
alignment function is a : {1 → 1, 2 → 2, 3 → 2, 4 → 3}.</p>
      <p>The following rule is used to translate interrogative sentences in Ukrainian Sign
Language: question words (for example, "how", "where", "when", "why", "how much") are
always placed at the end of the sentence. For example, an interrogative sentence is rendered
in USL as follows (alignment function a : {1 → 2, 2 → 3, 3 → 1, 4 → 4}):</p>
      <p>UWL: Де ти працюєш? (Where do you work?) [De ty pratsiuiesh?]
USL: ТИ ПРАЦЮВАТИ ДЕ ? (YOU DO WORK WHERE ?) [TY PRATSUVATY DE ?]</p>
      <p>Another example of an interrogative sentence with the same alignment:
UWL: Хто черговий сьогодні? (Who is on duty today?) [Khto cherhovyi sohodni?]
USL: ЧЕРГОВИЙ СЬОГОДНІ ХТО ? (ON DUTY TODAY WHO ?) [CHERHOVYI SOHODNI KHTO ?]</p>
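Assuming 1-based positions, the interrogative example and its alignment function can be sketched as a simple mapping (the variable names are illustrative):

```python
# The interrogative example: the alignment function maps USL position j
# to the UWL position i it corresponds to, a : {1→2, 2→3, 3→1, 4→4}.
uwl = ["Де", "ти", "працюєш", "?"]
usl = ["ТИ", "ПРАЦЮВАТИ", "ДЕ", "?"]
alignment = {1: 2, 2: 3, 3: 1, 4: 4}

# Pair every USL gloss with the UWL word its position maps to.
pairs = [(usl[j - 1], uwl[i - 1]) for j, i in alignment.items()]
print(pairs)  # [('ТИ', 'ти'), ('ПРАЦЮВАТИ', 'працюєш'), ('ДЕ', 'Де'), ('?', '?')]
```

The question word ДЕ, last in the USL sentence, is thereby aligned with Де at the start of the UWL sentence.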
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>The IBM #1 model describes the probability of translating a USL sentence
f = (f_1, ..., f_lf) of length l_f into a UWL sentence e = (e_1, ..., e_le) of length l_e,
with each UWL word e_j aligned to a USL word f_a(j) in accordance with the alignment
function a : j → i, as follows [12]:</p>
      <p>p(e, a | f) = ε / (l_f + 1)^l_e · ∏_{j=1..l_e} t(e_j | f_a(j))   (1)</p>
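Equation (1) can be evaluated directly. In the sketch below, ε and the translation table t hold made-up probabilities purely for illustration; they are not estimated values from the corpus.

```python
# Sketch of equation (1): p(e, a | f) = eps / (l_f + 1)**l_e * prod t(e_j | f_a(j)).
eps = 1.0
f = ["Я", "ПИСАТИ", "СТАТТЯ"]   # USL sentence f, length l_f = 3
e = ["я", "пишу", "статтю"]     # UWL sentence e, length l_e = 3
a = {1: 1, 2: 2, 3: 3}          # alignment function a(j), 1-based positions
t = {("я", "Я"): 0.9, ("пишу", "ПИСАТИ"): 0.8, ("статтю", "СТАТТЯ"): 0.8}

p = eps / (len(f) + 1) ** len(e)       # eps / (l_f + 1)^l_e
for j, e_j in enumerate(e, start=1):
    p *= t[(e_j, f[a[j] - 1])]         # multiply in t(e_j | f_a(j))
print(round(p, 6))  # prints 0.009
```

Here the product 0.9 · 0.8 · 0.8 is divided by (3 + 1)^3 = 64, giving 0.009.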
      <p>If there is a manually aligned corpora of parallel texts, then it is possible to estimate the
parameters of the IBM #1 model by maximum likelihood. Since this corpus for Ukrainian
Sign Language does not exist, we use the EM algorithm [12]. Let's consider the application
of the EM algorithm on a simple example (Figure 2).</p>
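A minimal sketch of the EM procedure for the IBM #1 model is shown below. The toy corpus, the uniform initialization, and the fixed iteration count are illustrative assumptions mirroring the simple example, not the actual training data of this study.

```python
from collections import defaultdict

# Toy parallel corpus of (UWL words, USL glosses) pairs; an illustrative
# assumption, not sentences from the real 230-sentence corpus.
corpus = [
    (["я", "пишу"], ["Я", "ПИСАТИ"]),
    (["пишу", "статтю"], ["ПИСАТИ", "СТАТТЯ"]),
    (["наукову", "статтю"], ["НАУКОВА", "СТАТТЯ"]),
]

usl_vocab = {g for _, glosses in corpus for g in glosses}
t = defaultdict(lambda: 1.0 / len(usl_vocab))  # uniform initialization of t(e | f)

for _ in range(10):                    # a few EM iterations
    count = defaultdict(float)         # expected counts c(e, f)
    total = defaultdict(float)         # normalizers per USL gloss f
    for e_sent, f_sent in corpus:
        for e in e_sent:               # E-step: collect fractional counts
            norm = sum(t[(e, f)] for f in f_sent)
            for f in f_sent:
                count[(e, f)] += t[(e, f)] / norm
                total[f] += t[(e, f)] / norm
    for (e, f), c in count.items():    # M-step: re-estimate t(e | f)
        t[(e, f)] = c / total[f]

# Repeated co-occurrence drives t("пишу" | "ПИСАТИ") toward the top.
print(round(t[("пишу", "ПИСАТИ")], 3))
```

Because “пишу” co-occurs with ПИСАТИ in two sentence pairs while the competing words appear only once each, the expectation-maximization loop concentrates probability mass on that pair.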
      <p>Figure 2 shows a simple corpus for the example, built from the phrases
“Я ПИСАТИ” (I WRITE) [YA PYSATY], “ПИСАТИ СТАТТЯ” (WRITE ARTICLE) [PYSATY STATTIA]
and “НАУКОВА СТАТТЯ” (SCIENTIFIC ARTICLE) [NAUKOVA STATTIA].</p>
      <sec id="sec-4-1">
        <p>For the sentence pair UWL “Я пишу наукову статтю” (I am writing a scientific
article) [Ya pyshu naukovu stattiu] and USL “Я ПИСАТИ СТАТТЯ НАУКОВА” [YA PYSATY
STATTIA NAUKOVA], the alignment matrix will look like this (Figure 3):</p>
        <p>The rows of the matrix in Figure 3 correspond to the USL glosses Я, ПИСАТИ,
СТАТТЯ, НАУКОВА, and the columns to the UWL words я, пишу, наукову, статтю.</p>
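The alignment matrix itself can be built as a 0/1 grid from the alignment function; the sketch below assumes the declarative example, with rows for USL glosses and columns for UWL words:

```python
# Build the 0/1 word alignment matrix for the declarative example:
# rows are USL glosses, columns are UWL words, a : {1→1, 2→2, 3→4, 4→3}.
uwl = ["я", "пишу", "наукову", "статтю"]      # columns
usl = ["Я", "ПИСАТИ", "СТАТТЯ", "НАУКОВА"]    # rows
alignment = {1: 1, 2: 2, 3: 4, 4: 3}          # USL position to UWL position

matrix = [[0] * len(uwl) for _ in usl]
for j, i in alignment.items():
    matrix[j - 1][i - 1] = 1                  # mark the aligned cell

for gloss, row in zip(usl, matrix):
    print(gloss, row)
```

Each row contains a single 1, in the column of the UWL word the gloss is aligned to; the crossed diagonal in the last two rows reflects the swapped adjective and noun.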
        <p>Alignment matrices can be generalized for sentences of the type noun(pronoun),
verb, noun(pronoun), as shown in Figure 4. The alignment matrix for sentences of the type
noun(pronoun), verb, adjective, noun(pronoun) is shown in Figure 5.</p>
      </sec>
      <sec id="sec-4-4">
        <p>Figure 5: Word alignment matrix for sentence types: noun(pronoun), verb, adjective,
noun(pronoun)</p>
        <p>The words of the USL sentence (rows) are aligned to the words of the UWL sentence
(columns) as shown in the alignment matrix. Alignment may not always be one-to-one.
Figure 6 shows an example where one word in UWL, “писала” (wrote) [pysala], corresponds
to two words in USL, “ПИСАТИ БУВ” (WRITE WAS) [PYSATY BUV].</p>
        <p>In the matrix of Figure 6, the USL rows are noun(pronoun), verb, auxiliary verb,
noun(pronoun), and the UWL columns are noun(pronoun), verb, noun(pronoun).</p>
        <p>Finally, we generalize the alignment matrices for interrogative sentence types, where the
question word is always placed at the end for Ukrainian Sign Language (see Figure 7). It
should be noted that this generalization applies only to these types of interrogative
sentences.</p>
        <p>In the matrix of Figure 7, the USL rows are noun(pronoun), verb, pronoun(adverb,
numeral), and “?”, while the UWL columns are pronoun(adverb, numeral), noun(pronoun),
verb, and “?”.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>An overview of the known methods of machine translation was carried out. A relatively new
method of neural machine translation was studied.</p>
      <p>The analysis of the obtained results showed the expediency of using statistical machine
translation for Ukrainian Sign Language. In particular, the IBM #1 model was used to
translate USL words into Ukrainian Written Language and the EM algorithm to align words
in sentences. Experiments were conducted on the corpus of parallel sentences "Ukrainian
Written Language - Ukrainian Sign Language", which contains 230 sentences. In addition,
alignment matrices for sentence structures of the same type are generalized.</p>
      <p>The scientific novelty consists in the creation of a corpus of parallel texts "Ukrainian
Written Language - Ukrainian Sign Language", the application of IBM model #1 for the
translation of sentences from this corpus.</p>
      <p>The practical value lies in the possibility of translation from Ukrainian Sign Language to
Ukrainian Written Language and vice versa for people with physical disabilities.</p>
      <p>The following studies will focus on a more detailed study of neural machine translation
for Ukrainian Sign Language.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The research reported in this paper was supported by Information Systems and Networks
Department of Lviv Polytechnic National University. We would like to thank Lviv special
boarding school of Maria Pokrova for deaf children.</p>
      <p>[10] R. San-Segundo, A. Pérez, D. Ortiz, L. F. D’Haro, M. I. Torres, F. Casacuberta, Evaluation
of Alternatives on Speech to Sign Language Translation, in: Proceedings of Interspeech
2007, Antwerp, Belgium, 2007, pp. 2529-2532.
[11] A. Othman, M. Jemni, Statistical Sign Language Machine Translation: from English
written text to American Sign Language Gloss, IJCSI International Journal of Computer
Science Issues Vol. 8, Issue 5 (2011) 65-73.
[12] A. Othman, M. Jemni, Designing High Accuracy Statistical Machine Translation for Sign
Language Using Parallel Corpus: Case Study English and American Sign Language,
Journal of Information Technology Research, Vol. 12, Issue 2 (2019) 134–158. doi:
10.4018/JITR.2019040108
[13] Yu. G. Krivonos, Yu. V. Krak, A.V. Barmak, Information technology for modelling the
Ukrainian sign language, Journal Artificial Intelligence, Issue 3 (2009) 186-197.
[14] Yu. G. Kryvonos, I.V. Krak, O.V. Barmak, R.O. Bagriy, New Tools of Alternative
Communication for Persons with Verbal Communication Disorders, Cybernetics and
Systems Analysis, 52(5) (2016) 665–673.
[15] S. Z. Moe, Y. K. Thu, H. W. Hlaing, H. M. Nwe, N. H. Aung, H. A. Thant, N. W. Min, Statistical
Machine Translation between Myanmar Sign Language and Myanmar Written Text,
2018, MERAL Portal. URL: oai:meral.edu.mm:recid/00007665.
[16] A. Núñez-Marcos, O. Perez-de-Viñaspre, G. Labaka, A survey on Sign Language machine
translation, Expert Systems with Applications, Vol. 213, Part B (2023). doi:
10.1016/j.eswa.2022.118993
[17] M. De Coster, D. Shterionov, M. Van Herreweghe, J. Dambre, Machine translation
from signed to spoken languages: state of the art and challenges, Universal Access in
the Information Society (2024) 1-27. doi:10.1007/s10209-023-00992-1.
[18] D. Zhu, V. Czehmann, E. Avramidis, Neural Machine Translation Methods for
Translating Text to Sign Language Glosses, in: Proceedings of the 61st Annual Meeting
of the Association for Computational Linguistics Volume 1: Long Papers, July 9-14,
2023, pp. 12523–12541.
[19] X. Zhang, K. Duh, Approaching Sign Language Gloss Translation as a Low-Resource
Machine Translation Task, in: Proceedings of the 18th Biennial Machine Translation
Summit, Virtual USA, 1st International Workshop on Automatic Translation for Signed
and Spoken Languages, 2021, pp. 60-70.
[20] N. C. Camgoz, S. Hadfield, O. Koller, H. Ney, R. Bowden, Neural sign language translation,
in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp.
7784-7793.
[21] T. Ananthanarayana, P. Srivastava, A. Chintha, A. Santha, B. Landy, J. Panaro, A. Webster,
N. Kotecha, S. Sah, T. Sarchet, R. Ptucha, I. Nwogu, Deep Learning Methods for Sign
Language Translation, ACM Transactions on Accessible Computing 14(4) 1-30.
doi:10.1145/3477498.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Morrissey</surname>
          </string-name>
          ,
          <article-title>Building a sign language corpus for use in machine translation</article-title>
          ,
          <source>in: Proceedings of the 4th Workshop on Representation and Processing of Sign Languages: Corpora for Sign Language Technologies</source>
          , Valetta, Malta,
          <year>2010</year>
          , pp.
          <fpage>172</fpage>
          -
          <lpage>177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Veretennikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lozytskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vaskiv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kunanets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Legeza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lozynska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kunanets</surname>
          </string-name>
          ,
          <article-title>Information and technology support for the training of visually impaired people</article-title>
          ,
          <source>in: Workshop proceedings of the 8th International conference on "Mathematics. Information technologies. Education"</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bungeroth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ney</surname>
          </string-name>
          ,
          <article-title>Statistical Sign Language Translation</article-title>
          ,
          <source>in: Workshop proceedings: Representation and Processing of Sign Languages</source>
          , Lisbon, Portugal,
          <year>2004</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Lozynska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Davydov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pasichnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Veretennikova</surname>
          </string-name>
          ,
          <article-title>Rule-based machine translation into Ukrainian sign language using concept dictionary</article-title>
          ,
          <source>in: Proceedings of the 15th International conference on ICT in education, research and industrial applications. Integration, harmonization and knowledge transfer</source>
          , Vol.
          <volume>2387</volume>
          , Kherson, Ukraine,
          <year>2019</year>
          , pp.
          <fpage>191</fpage>
          -
          <lpage>201</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Koehn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Birch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Steinberger</surname>
          </string-name>
          ,
          <article-title>Machine Translation Systems for Europe</article-title>
          ,
          <source>in: Proceedings of Machine Translation Summit XII</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Elsayed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Fathy</surname>
          </string-name>
          ,
          <article-title>Sign Language Semantic Translation System using Ontology and Deep Learning</article-title>
          ,
          <source>International Journal of Advanced Computer Science and Applications</source>
          , Vol.
          <volume>11</volume>
          , No.
          <issue>1</issue>
          (
          <year>2020</year>
          )
          <fpage>141</fpage>
          -
          <lpage>147</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miyazaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Morita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sano</surname>
          </string-name>
          ,
          <article-title>Machine Translation from Spoken Language to Sign Language using Pre-Trained Language Model as Encoder</article-title>
          ,
          <source>in: Proceedings of the 9th Workshop on the Representation and Processing of Sign Languages, Language Resources and Evaluation Conference (LREC 2020)</source>
          , Marseille,
          <year>2020</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Morrissey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Way</surname>
          </string-name>
          ,
          <article-title>Joining hands: developing a sign language machine translation system with and for the deaf community</article-title>
          ,
          <source>in: Proceedings of the Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments, Assistive Technology for All Ages (CVHI-07)</source>
          , Granada, Spain,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dreuw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Morrissey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Way</surname>
          </string-name>
          ,
          <article-title>Hand in Hand: Automatic Sign Language to Speech Translation</article-title>
          ,
          <source>in: Proceedings of Theoretical and Methodological Issues in Machine Translation (TMI-07)</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>214</fpage>
          -
          <lpage>220</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>