<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Application for Hearing Impaired that Understands Signal Language from Hand Gestures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Muhammed Dogan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Berker Uysal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pinar Kirci</string-name>
          <email>pinarkirci@uludag.edu.tr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bursa Uludag University</institution>
          ,
          <addr-line>Gorukle, Bursa, 16285</addr-line>
          ,
          <country country="TR">Turkey</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Computers, which are used in every area of life and have become inseparable from it, contribute to human life through advanced object recognition and image processing technologies. These technologies are actively used, with great benefit, in fields as varied as medicine, the military, security, traffic, industry, agriculture, astronomy, retail, environmental safety, geodesy and photogrammetry. Building on this, the recognition of sign language by computers and its translation into speech and writing would ease daily life both for individuals with hearing impairments and for those who do not know sign language. This study aims to recognize sign language letters using image processing technologies and to convert them into sound and text at the same time.</p>
      </abstract>
      <kwd-group>
        <kwd>Sign language</kwd>
        <kwd>object recognition</kwd>
        <kwd>image processing</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Object recognition is a technology actively used in many areas of daily life. As technology has developed, computers have taken over many jobs once done by humans, and their usage areas have expanded into many sectors, from agriculture and textiles to automotive and animal husbandry [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Image processing and object recognition are among the technologies that help people the most. As workloads have grown in many areas, manpower has become insufficient and these advanced technologies have been adopted in its place; their usage areas expand day by day [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Today, people with hearing and speech impairments communicate through sign language, which is essential for them [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. With it they can communicate easily, express their needs and converse.
      </p>
      <p>
        Knowing sign language, however, does not by itself make life easier for people with hearing impairments. The biggest problem they face is that the people around them do not know sign language, which makes communication very difficult. Since sign language cannot be taught to everyone, the best way to overcome this communication barrier is an application: with it, everyone can understand each other without any extra effort [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>First of all, the most important step is to know sign language well, understand its structure, observe international standards and approach it as a whole. For the application to succeed, the meaning of each sign must be known exactly and treated as fixed. These operations can be carried out with object recognition technology: the sign language letters are analyzed first, and a database of signs is then built from the alphabet.</p>
      <p>
        However, certain points need attention here. One of the most important decisions is the size of the database, which must be tuned carefully: too little data or too much will harm both accuracy and functionality [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        According to World Health Organization (WHO) data, there are 466 million hearing-impaired individuals worldwide, 34 million of whom are children under the age of 15. For this reason, different sign languages have been developed throughout history for hearing-impaired individuals to communicate [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], a novel dynamic sign language recognition method was presented. It extracts features from the trajectory and the key hand shape, and adopts a key-frame-weighted DTW (dynamic time warping) algorithm to implement a hierarchical matching strategy, gradually matching sign language gestures at the two levels of trajectory and key hand shape.
      </p>
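      <p>A minimal sketch of the DTW idea behind this approach, with the paper's key-frame weighting simplified to uniform weights (an assumption for illustration): two gesture trajectories, given as sequences of 2-D hand positions, are aligned and the cumulative alignment cost is returned.</p>

```python
# Dynamic time warping between two gesture trajectories (point sequences).
# The key-frame weighting of the cited method is simplified to uniform
# weights here; only the classic DP recurrence is shown.
import math

def dtw_distance(a, b):
    """Cumulative DTW cost between two sequences of 2-D points."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])          # local point distance
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]

path = [(0, 0), (1, 1), (2, 2)]
print(dtw_distance(path, path))                        # 0.0
print(dtw_distance(path, [(0, 1), (1, 2), (2, 3)]))    # 3.0
```

      <p>Identical trajectories align at zero cost, while a vertically shifted copy accumulates the per-point offset along the diagonal of the cost matrix.</p>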
      <p>
        The study presented in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] developed a smart wearable American Sign Language (ASL) interpretation model based on deep learning. The model applies sensor fusion to integrate features from six inertial measurement units (IMUs), and aims to help hearing-impaired people communicate with society as effectively as possible.
      </p>
      <p>
        The study in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] used knowledge of Japanese Sign Language (JSL) phonology and a dictionary to develop a real-time JSL sign recognition system. The system employs a Kinect v2 sensor to collect the sign features of hand shape, position and motion; the depth sensor provides real-time processing and robustness against environmental changes.
      </p>
      <p>
        In terms of phonology, sign language words are composed of three elements: the hand's motion, position and shape. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a recognition system for Japanese Sign Language (JSL) was developed that abstracts manual signals based on these three elements, using a JSL word dictionary. Features such as hand coordinates and depth images are extracted from the manual signals with the Kinect v2 depth sensor.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], an algorithm was presented for segmenting videos of signs into sequences of still images, together with four techniques for Arabic sign language recognition: Modified Fourier Transform (MFT), Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG), and a combination of HOG with the Histogram of Optical Flow (HOG-HOF). These techniques were evaluated using a Hidden Markov Model (HMM).
      </p>
      <p>
        A survey on dynamic sign language recognition was presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. It covers two categories of methods, notably HMM-based ones, the main datasets in various languages, and the methods used for data preprocessing.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], American Sign Language is used. The paper aims to help people with disabilities communicate with people who do not know sign language, using computer vision and deep learning. To solve this problem it uses a convolutional neural network: one part captures the person's hand expressions on video and translates them to text, while the other does the reverse, converting text into a GIF. Integrating the two parts enables two-way communication.
      </p>
      <p>
        The research paper [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] compared three optimizers for gradient descent: the stochastic gradient descent algorithm, the adaptive gradient algorithm, and the root mean square propagation algorithm. According to the results, root mean square propagation (RMSProp) was the best optimizer at preserving the performance of the Optimized Convolution Neural Network (OCNN) in sign language recognition.
      </p>
      <p>This paper draws on the state-of-the-art literature that identifies areas of interest in non-visual inputs, image frames and video frames for determining the features of a particular hand gesture. The survey also takes into account the approaches used by researchers across different sign languages, such as American Sign Language and Taiwanese Sign Language, which helps to develop a perspective for Indian Sign Language.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Sign language</title>
      <p>
        Throughout history, communication has been essential for people to get along with each other. It can be defined as the transmission of information from one person to another: any exchange of meanings such as feelings, thoughts and ideas. Verbal communication, based on hearing and speaking, is the form most widely used among people [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        People with hearing loss, however, have difficulty communicating with people who can hear. Sign languages emerged to overcome this difficulty. Sign language is a silent, visual language of gestures and facial expressions that enables communication with individuals who cannot hear or speak [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The earliest written records of sign language date back to the 5th century BC. Until the 19th century, most knowledge about sign languages came not from documents but from manual alphabets produced to facilitate the transfer of words from oral language into sign language [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        In the early 1770s in France, the hand gestures used by hearing-impaired individuals were accepted as a grammatical language and began to be taught in schools. This method was later carried to America by a French sign linguist. In 1817, Thomas Gallaudet established the first sign language school in the United States, which educated only hearing-impaired individuals [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        A gesture is an action: a movement of the hand or face that communicates thoughts, sentiments or feelings. Raised eyebrows and shoulder movements are among the actions people use in daily life. Gesture-based communication is a more formal and expressive form of it, in which every word or letter is assigned a particular action; sign language is a fully coded system in which each action has an assigned meaning. For many deaf people, communicating with signs is the only means of interaction [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>Every country has its own sign language; the gestures and movements of the hands and fingers, and the meanings they represent, differ. Much research and many implementations address sign languages such as British Sign Language (BSL), Indian Sign Language (ISL), Chinese Sign Language (CSL) and American Sign Language (ASL), including automated systems that translate the languages involved. ISL, moreover, is a recently developed language.</p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] worked on the literature that identifies areas of interest in non-visual inputs, image frames and video frames for determining the features of a particular hand gesture. Its survey considers the approaches taken by researchers across various sign languages, such as American Sign Language and Taiwanese Sign Language, which helps to develop a perspective for Indian Sign Language.
      </p>
      <p>Human interpreters are not always available to interpret ISL. For this reason, Sign Language Recognition Systems (SLRS) are studied for ISL identification. However, designing an SLRS for ISL is very difficult compared with other sign languages: ISL is complex because it combines single- and double-handed gestures and has an extensive vocabulary with many similar gestures.</p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] presents a study of ISL, its syntax and vocabulary, and the current techniques for designing an ISL recognition system.
      </p>
      <p>
        Sign language is a medium for communicating with deaf and mute people, and it is unknown to most hearing people, so establishing communication between hearing people and a hearing-impaired person is difficult. Many tools have been presented to help, but they unfortunately do not produce accurate results. To communicate, various finger gestures are used, and a designed model converts those gestures into the words or letters of a specific language [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        It is not known how many sign languages exist in the world. Generally, each country has its own sign language, and some countries have more than one. Ethnologue counted 137 sign languages in its 2013 edition [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>The Turkish Sign Language shown in Figure 1 is one of these sign languages. There are records that sign language was used in the palaces, baths and even courts of the Ottoman Empire from the 16th and 17th centuries onwards. However, it is not known whether today's sign language is the one used in the Ottoman period [19].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed system</title>
      <p>The study aims to add not only letters but also basic sign language words to the system, and to convey the result displayed on the screen to the user audibly once the necessary steps are complete.</p>
      <p>The working stages of the system are: saving the alphabet letters and basic words in the database; taking new images with the camera; comparing the captured image with the database; outputting the value obtained from the comparison; and, finally, transmitting that output to the user audibly.</p>
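      <p>The stages above can be condensed into a sketch, with the database reduced to labelled feature vectors and the comparison step to nearest-neighbour matching (the names and vectors are illustrative, not from the study):</p>

```python
# Pipeline sketch: stored references, a new frame's features, comparison,
# and the label that would then be written on screen and spoken via gTTS.
import numpy as np

database = {                      # 1. reference features saved per sign
    "A": np.array([0.9, 0.1, 0.0]),
    "B": np.array([0.1, 0.8, 0.1]),
}

def classify(frame_features):
    """2-4. Compare a new image's features with the database and output
    the label of the closest stored reference."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - frame_features))

print(classify(np.array([0.85, 0.15, 0.0])))   # A
```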
      <p>The computer's built-in camera was used to collect images and recordings. One limitation of the study is the environment: because the work constantly involves collecting, processing and detecting image data, the environment strongly constrains which images can be used. It should be as uniform and stable as possible so that images are collected and perceived properly.</p>
      <p>The system's performance depends largely on the environment: heavy visual noise makes object detection very difficult. Since image processing operations are costly and computationally demanding, they must run as fast as possible; for this, a GPU should be used rather than a CPU, and having a good GPU is an important criterion for system performance [20].</p>
      <p>The project combines image processing, object recognition and machine learning. The main libraries, frameworks and plugins used are OpenCV, TensorFlow, Keras, PyQt, NumPy, gTTS and Mediapipe.</p>
      <p>The PyCharm CE IDE (Integrated Development Environment) was used to develop the project. PyCharm is a highly advanced IDE, especially for the Python language, and offers the developer many conveniences. After the IDE was installed, the libraries and frameworks used in the project were added to it.</p>
      <p>Since hand movements must be recognized, the hand must first be perceived by the program. To do this, the camera is activated and its feed is displayed from Python with OpenCV's imshow command. The hand image on the camera is then customized for object detection.</p>
      <p>Three different methods were used in this part.</p>
      <p>
        In the first stage, hand tracking was performed with the Mediapipe library: each joint point in the skeletal structure of the hand is detected and connected to the others with lines and points. In this way specific points of the hand can be monitored, and interventions can be made where necessary [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Object detection was performed by creating a fixed area (frame) on the screen and masking the image by filtering within this area.</p>
      <p>The main reason for filtering and masking during hand detection is to make the target object stand out from the other objects in the environment. If there is no uniform background such as a green screen, the object to be detected must be highlighted against everything else; this is an important detail for both detection and tracking.</p>
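      <p>A minimal masking sketch of this idea: pixels whose colour falls inside a rough skin-tone range are kept and everything else is zeroed, so the hand stands out from the background. The threshold values are illustrative; a real pipeline would typically convert to HSV with OpenCV and tune the range:</p>

```python
# Colour-range masking: keep in-range pixels, zero out the background.
import numpy as np

def mask_hand(rgb, lo=(80, 40, 30), hi=(255, 200, 170)):
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)   # per-pixel in-range test
    return rgb * mask[..., None]                        # background becomes black

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)        # skin-like pixel survives the mask
print(mask_hand(img)[0, 0])       # [200 120  90]
```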
      <p>In the project, the frame method without filtering was used for hand detection. An extra screen was added so that the hand could be detected and evaluated independently of environmental noise, with the background kept constant. This added screen used only part of the normal screen, minimized to cover just the area that needs to be detected (an area large enough to show the hand).</p>
      <p>Although using a frame without a filter is effective in the dataset creation and model training steps, its performance can vary across backgrounds, so it is important that the user choose a noise-free, flat background. In prior studies, however, hand detection was carried out within a fixed frame area without filtering or hand tracking, and this method was therefore used here.</p>
      <p>After the object detection and filtering steps, a dataset was created, consisting of the sign language letters and words to be recognized. We created and collected the data ourselves: when the application is run, the letter to be shown is marked on the frame screen, and the image within the frame is then photographed. The most important point while building the dataset was to capture the images from every angle and position.</p>
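      <p>Dataset capture then reduces to saving each photographed frame under the marked label with a running index. The directory layout below is an assumption, not specified in the study:</p>

```python
# Build a labelled file path for each captured sample, e.g. one folder
# per sign with zero-padded sample indices.
from pathlib import Path

def sample_path(root, label, index):
    """e.g. dataset/A/A_0007.png"""
    return Path(root) / label / f"{label}_{index:04d}.png"

print(sample_path("dataset", "A", 7).as_posix())   # dataset/A/A_0007.png
```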
    </sec>
    <sec id="sec-4">
      <title>4. The presented scenario</title>
      <p>The project aims to use the 29 letters of the Turkish Sign Language alphabet as well as basic sign language words. With all the letters of the alphabet added, plus basic hand gestures, letters can be placed side by side to form words.</p>
      <p>The hand gestures added separately from the alphabet serve to switch to detecting the next letter of a word once the perception of the current letter is complete. In this way the word can be completed.</p>
      <p>Supervised learning was used as the learning method. When the program runs, a small screen and a large (main) screen open, and words are shown by placing the hand inside the green frame on the big screen.</p>
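      <p>Supervised learning in miniature: a model is fitted on labelled example features and then predicts the sign shown in the frame. A nearest-centroid classifier stands in here for the study's actual model, purely for illustration:</p>

```python
# Fit on labelled features, then predict the label of a new feature vector.
import numpy as np

def fit(X, y):
    """Average the training vectors of each label into a centroid."""
    return {label: np.mean([x for x, l in zip(X, y) if l == label], axis=0)
            for label in set(y)}

def predict(model, x):
    """Return the label whose centroid is closest to x."""
    return min(model, key=lambda l: np.linalg.norm(model[l] - x))

X = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
model = fit(X, ["A", "A", "B"])
print(predict(model, np.array([0.8, 0.2])))   # A
```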
      <p>If the program sees no sign in the frame area, it simply writes "blank" on the screen and waits for a sign to be shown. The signs are then shown one by one; each recognized word is written on the screen and transmitted audibly as soon as it is recognized. When the end sign is shown, the words on the screen are reset and a new word is expected.</p>
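      <p>The word-building loop just described can be sketched as a small state machine: each recognized sign appends a letter, a blank frame is ignored, and the end sign closes the current word (the sign names are illustrative):</p>

```python
# Accumulate per-frame predictions into words; "end" closes a word,
# "" (blank) means nothing was shown in the frame.
END_SIGN, BLANK = "end", ""

def build_words(predictions):
    words, current = [], ""
    for p in predictions:
        if p == BLANK:
            continue                   # nothing in the frame: keep waiting
        if p == END_SIGN:
            if current:
                words.append(current)  # word complete: would be spoken via gTTS
            current = ""
        else:
            current += p               # append the recognized letter
    return words

print(build_words(["E", "", "V", "end", "A", "T", "end"]))   # ['EV', 'AT']
```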
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>With today's advancing technology and tools, image processing has become widespread and is used frequently in every field. As it has developed and spread into many sectors, it has made people's work much easier, faster and more automated, and has considerably reduced the margin of error.</p>
      <p>With image processing techniques, even objects that are very difficult for the human eye to perceive and capture become perceivable and processable, and information can be obtained from them. One of the most important features of image processing is that it yields information about the attributes of an object without requiring interaction with the environment. The present study has advanced in this direction.</p>
      <p>In the study, a dataset was created so that data could be detected, transmitted and recognized very accurately. When designing the model, we considered and analyzed successful prior studies and determined our scenario accordingly. Those studies demonstrated the success of multi-layer architectures on multi-class problems, so multiple layers with multiple classes were preferred here. The results showed that the study succeeded in recognizing sign language letters by perceiving the hand.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[19] Türk İsaret Dili, 2023. URL: https://tr.wikipedia.org/wiki/T%C3%BCrk_%C4%B0%C5%9Faret_Dili</p>
      <p>[20] A. N. Erkan, C. Keskin, L. Akarun, Etkileşimli Ara Yuzler Icin Gercek Zamanlı El Izleme ve HMM Tabanlı Uc Boyutlu Hareket Tanima, in: Proceedings of the IEEE Conference on Signal Processing and Communications Applications, SIU 2003, Istanbul, Turkey, pp. 192-195.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yakut</surname>
          </string-name>
          ,
          <source>Isaret Dili Harflerinin Goruntu Isleme Yontemleriyle Tanınması icin Bir Uygulama, Thesis</source>
          , Fırat University, Engineering Science Institute,
          <year>2013</year>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] B. Oktekin, N. Cavus, Isitme ve Konusma Engelli Bireyler için Isaret Tanıma Sistemi Gelistirme, Folklor/Edebiyat (2019) Vol. 25, No. 97-1, 2019/1
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] A. Z. Oral, Turk Isaret Dili Cevirisi, Siyasal Kitabevi, 2016, ISBN 9786059221207
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] Isaret dili öğreniyorum, 2022. URL: https://isaretdili.ego.gov.tr/isaret-dili-tarihcesi/
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. S.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Research on Dynamic Sign Language Recognition Based on Key Frame Weighted of DTW</article-title>
          , W. Fu et al. (Eds.):
          <source>ICMTEL</source>
          <year>2021</year>
          , LNICST 388, pp.
          <fpage>11</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. Y.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <article-title>Study of Sign Language Recognition Using Wearable Sensors</article-title>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Singh</surname>
          </string-name>
          et al. (Eds.):
          <source>IHCI</source>
          <year>2020</year>
          , LNCS 12615, pp.
          <fpage>229</fpage>
          -
          <lpage>237</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] S. Sako, M. Hatano, T. Kitamura, Real-Time Japanese Sign Language Recognition Based on Three Phonological Elements of Sign, C. Stephanidis (Ed.): HCII 2016 Posters, Part II, CCIS 618, pp. 130-136, 2016.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] S. Awata, S. Sako, T. Kitamura, Japanese Sign Language Recognition Based on Three Elements of Sign Using Kinect v2 Sensor, C. Stephanidis (Ed.): HCII Posters 2017, Part I, CCIS 713, pp. 95-102, 2017.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] A. I. Sidig, H. Luqman, S. A. Mahmoud, Arabic Sign Language Recognition Using Optical Flow-Based Features and HMM, F. Saeed et al. (eds.), Recent Trends in Information and Communication Technology, Lecture Notes on Data Engineering and Communications Technologies 5, 2018, DOI 10.1007/978-3-319-59427-9_32
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] Z. Sun, A Survey on Dynamic Sign Language Recognition, S. K. Bhatia et al. (eds.), Advances in Computer, Communication and Computational Sciences, Advances in Intelligent Systems and Computing 1158, 2021, https://doi.org/10.1007/978-981-15-4409-5_89
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rakesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bharadhwaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Harsha</surname>
          </string-name>
          ,
          <article-title>Sign Language Recognition Using Convolutional Neural Network</article-title>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Raj</surname>
          </string-name>
          et al. (eds.),
          <source>Innovative Data Communication Technologies and Application, Lecture Notes on Data Engineering and Communications Technologies</source>
          <volume>59</volume>
          ,
          <year>2021</year>
          , https://doi.org/10.1007/
          <fpage>978</fpage>
          -981-15-9651-3_
          <fpage>58</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Swarnkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ambhaikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. K.</given-names>
            <surname>Swarnkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <article-title>Optimized Convolution Neural Network (OCNN) for Voice-Based Sign Language Recognition: Optimization and Regularization</article-title>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joshi</surname>
          </string-name>
          et al. (eds.),
          <source>Information and Communication Technology for Competitive Strategies (ICTCS 2020), Lecture Notes in Networks and Systems 191</source>
          ,
          <year>2022</year>
          , https://doi.org/10.1007/978-981-16-0739-4_60
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Karaca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bayir</surname>
          </string-name>
          ,
          <article-title>Turk Isaret Dili Incelemesi: Iletisim ve Dil Bilgisi</article-title>
          ,
          <source>Ulusal Egitim Akademisi Dergisi</source>
          (
          <year>2018</year>
          ), Vol.
          <volume>2</volume>
          , No. 2
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <article-title>İşaret Dili</article-title>
          ,
          <year>2023</year>
          . URL: https://tr.wikipedia.org/wiki/%C4%B0%C5%9Faret_dili
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Rathi</surname>
          </string-name>
          ,
          <article-title>A Review Paper on Sign Language Recognition Using Machine Learning Techniques</article-title>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mathur</surname>
          </string-name>
          et al. (eds.),
          <source>Emerging Trends in Data Driven Computing and Communications, Studies in Autonomic, Data-driven and Industrial Computing</source>
          ,
          <year>2021</year>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Patil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yesane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sadani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Satav</surname>
          </string-name>
          , Literature Survey:
          <article-title>Sign Language Recognition Using Gesture Recognition and Natural Language Processing</article-title>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sharma</surname>
          </string-name>
          et al. (eds.),
          <source>Data Management, Analytics and Innovation, Lecture Notes on Data Engineering and Communications Technologies</source>
          <volume>70</volume>
          ,
          <year>2021</year>
          , https://doi.org/10.1007/978-981-16-2934-1_13
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Kr.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Purkayastha</surname>
          </string-name>
          ,
          <article-title>Intelligent Indian Sign Language Recognition Systems: A Critical Review</article-title>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tuba</surname>
          </string-name>
          et al. (eds.),
          <source>ICT Systems and Sustainability, Lecture Notes in Networks and Systems</source>
          <volume>321</volume>
          , https://doi.org/10.1007/978-981-16-5987-4_71
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sannareddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Barlapudi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. K. R.</given-names>
            <surname>Koppula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Vuduthuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Seelam</surname>
          </string-name>
          ,
          <article-title>Sign Language Recognition Using Convolution Neural Network</article-title>
          ,
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          et al. (eds.),
          <source>Soft Computing and Signal Processing, Advances in Intelligent Systems and Computing 1413</source>
          ,
          <year>2022</year>
          , https://doi.org/10.1007/978-981-16-7088-6_59
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>