<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A novel Arabic handwriting recognition system based on image matching technique</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maamar Kef</string-name>
          <email>kef@yahoo.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Leila Chergui</string-name>
          <email>pgleila@yahoo.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Sciences</institution>
          ,
          <addr-line>Université Mostefa Benboulaid - Batna 2, Batna</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents a new off-line recognition system for Arabic handwritten words. The proposed system uses a scale-invariant descriptor, namely SIFT, and relies on an image matching technique for classification. Recognition is performed through a keypoint matching procedure using a nearest-neighbor distance ratio. The paper also presents a new large Arabic handwritten word database, which provides a new framework for benchmarking and a freely available Arabic handwritten word dataset. Several tests have been performed using our new database and the well-known IFN/ENIT database for comparison purposes. A high correct recognition rate was reported. Index Terms: Arabic handwriting recognition, feature extraction, SIFT descriptors, keypoint matching, new Arabic database.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Automatic recognition of handwritten scripts is an area
of pattern recognition that is extremely useful in numerous
fields, including documentation analysis, mailing address
interpretation, bank check processing and more recently the
reconstruction and recognition of historical manuscripts.</p>
      <p>Recognition of Arabic handwriting remains one of the
most challenging problems in the pattern recognition domain.
Arabic is written by more than 240 million people, in over
20 different countries. The standard Arabic script contains 28
letters. Each letter has either two or four different shapes,
depending on its position within a word.</p>
      <p>
        One of the most challenging aspects of off-line handwriting
recognition is finding a good database that well represents
the variety of handwriting styles. Compared with the great
number of existing databases for English script, the IFN/ENIT
database [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] was the only freely accessible Arabic database;
this incited us to develop a new large database which will be
freely available for research and academic use.
      </p>
      <p>In this research we present a new, fast and robust Arabic
handwriting recognition system based on the SIFT descriptor and
a recognition procedure that uses keypoint matching. Contrary
to the majority of handwritten character recognition systems,
the proposed method operates without any preprocessing steps,
since the features used are invariant to image transformations
and are highly distinctive in a large database. We also
introduce a new large database of Arabic handwritten words,
which provides a comparison tool for research works in the
character recognition domain.</p>
      <p>The remainder of this paper is divided into six sections.
The next section reviews several works done in the handwritten
Arabic recognition field. Section 3 details the feature
extraction method and Section 4 describes our new Arabic
handwritten word database. Experimental results including
keypoint detection and matching are reported in Section 5,
where a comparative analysis of the experimental results is
also discussed. Finally, some concluding remarks end the
paper.</p>
      <p>
        The main idea of the scale-invariant feature transform (SIFT)
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is to detect distinctive invariant features in images
that can later be used to perform reliable matching between
different views of an object or scene. Because of the proven
efficiency of the SIFT keypoint detector, many researchers have
been attracted to extending or using these descriptors in a
wide range of applications. In the handwriting recognition
domain, SIFT has been addressed in only a few published
papers.
      </p>
      <p>
        Diem and Sablatnig [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] tried to solve the problem of degraded handwritten
character recognition using SIFT descriptors. In order to
recognize a character, the local descriptors are first
classified with a Support Vector Machine (SVM) and then
identified by a voting scheme of neighboring local
descriptors.
      </p>
      <p>
        De Campos et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] presented a solution to the problem of
recognizing characters in images of natural scenes, situations
that traditional OCR (Optical Character Recognition)
techniques cannot handle well. The problem is addressed in an
object categorization framework based on a bag-of-visual-words
representation. For feature extraction, the authors used SIFT
along with other descriptors.
      </p>
      <p>
        Zhang et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] proposed a novel SIFT-based feature for off-line
handwritten Chinese character recognition. The presented
feature is a modification of the SIFT descriptor that takes
into account the characteristics of handwritten Chinese
samples. An MQDF classifier was used in the classification
phase, and experiments showed that the proposed method
outperforms the original SIFT feature and two traditional
features, the Gabor and gradient features.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] a new method for the off-line recognition of Tamil
handwritten characters based on local feature extraction was
investigated. The authors represented each character by a set
of local SIFT feature vectors.
      </p>
      <p>
        The problem of character type classification on document
images was addressed in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In that work, the authors proposed a method based on a
probabilistic topic model and the SIFT descriptor. The
character types are: mathematical formula, printed Japanese,
and printed and handwritten English.
      </p>
      <p>
        Ramana et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] examined the issues in recognizing Devanagari
characters in the wild, such as sign boards, advertisements,
logos, shop names, notices, and address posts. They used a
variation of SIFT, namely Dense SIFT features, which are
derived by densely sampling keypoints from the character and
extracting SIFT descriptors around them.
      </p>
      <p>
        Mao et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] incorporated SIFT descriptors into Chinese calligraphy
word style recognition (seal script, clerical script, standard
script, semi-cursive script and cursive script). In this
study, the authors proposed a method based on K-Nearest
Neighbors (KNN) and feature vector filtering. Experiments show
that the SIFT feature yields better recognition results than
the Gabor and GIST features.
      </p>
      <p>
        For Arabic handwriting recognition, we found only one
work which uses SIFT as a descriptor, introduced by Rothacker
et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. They applied the Harris detector to extract corners
and, for each corner, computed SIFT descriptors; they also
used a segmentation phase with a set of Hidden Markov models.
      </p>
      <p>
        Aouadi and Kacem Echi [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] presented a new method for Arabic handwritten word
recognition. The authors extracted structural features from
word images and trained a classic right-to-left Hidden Markov
Model. Experiments were carried out on a set of ancient Arabic
manuscripts and the standard IFN/ENIT database. An average
recognition rate of 87% was reported.
      </p>
      <p>
        Rabi et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] presented a recognition system for Arabic cursive
handwriting using embedded training based on Hidden Markov
Models. The extracted features were based on foreground pixel
densities, concavity and derivative features computed with a
sliding window; some of these features depend on baseline
estimation. The system achieved a correct recognition rate of
87.93%.
      </p>
      <p>II. SIFT DESCRIPTOR</p>
      <p>
        SIFT was developed by David Lowe in 2004 [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] as a
continuation of his previous work on invariant feature detection
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and it presents a method for detecting distinctive invariant
features from images that can be later used to perform reliable
matching between different views of an object or scene. This
approach consists of four major computational stages (figure
1).
      </p>
      <p>These stages are executed in a descending order (cascade
approach), and at every stage a filtering process is applied so
that only the keypoints that are robust enough are allowed to
pass to the next stage. According to Lowe, this significantly
reduces the cost of detecting the features. The descriptor is
formed from a vector containing the values of all the
orientation histogram entries.</p>
      <p>For image matching and recognition, SIFT features are
first extracted from a set of learning images and stored in a
database. A new image is matched by individually comparing the
features extracted from it to those previously stored in the
database and finding candidate matches based on the Euclidean
distances between their feature vectors. The Euclidean
distance between SIFT feature descriptors is used as a cost
measure.</p>
      <p>The experiments conducted in this paper use 4x4x8 = 128
elements in each keypoint feature vector. Regarding the image
matching procedure, the local descriptors from several images
are matched. A complete comparison is performed by computing
the Euclidean distance between all potential matching pairs. A
nearest-neighbor distance-ratio matching criterion is then
used to reduce mismatches.</p>
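      <p>As an illustration of the nearest-neighbor distance-ratio criterion described above, the following sketch (our own, not the authors' implementation; it assumes descriptors are stored as rows of a NumPy array) matches 128-element descriptors by Euclidean distance and keeps a pair only when the closest model descriptor is clearly closer than the second closest:</p>
      <preformat>
```python
import numpy as np

def match_descriptors(test_desc, model_desc, ratio=0.9):
    """Match SIFT descriptors with the nearest-neighbor
    distance-ratio criterion: a pair is kept only when the closest
    model descriptor is markedly closer than the second closest."""
    matches = []
    for i, d in enumerate(test_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)  # Euclidean distances
        nn1, nn2 = np.argsort(dists)[:2]                # two nearest neighbors
        if dists[nn1] < ratio * dists[nn2]:
            matches.append((i, int(nn1)))
    return matches
```
      </preformat>
      <p>With a ratio of 0.9, as retained in the experiments below, a test keypoint whose two nearest model descriptors are nearly equidistant is rejected as ambiguous.</p>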
      <p>III. THE NEW ARABIC HANDWRITING WORD DATABASE</p>
      <p>In order to make the database as representative as
possible, we focused on the main factors responsible for
variations in handwriting style, such as age, sex, educational
level, profession, residence town, etc.</p>
      <p>Data collection was conducted using 2100 forms. Each
writer was asked to fill in one form comprising 11 Algerian
village names, with each word written twice. The form also has
a field for the writer's personal information, including name,
age, residence town, and profession. Each form possesses 15
exemplars. An example of a filled form, represented in
grayscale, is shown in figure 2.</p>
      <p>All the extracted images have been archived in two
different formats, grayscale and binary, in TIFF file format
at a 300 dpi resolution. The Arabic handwritten data were
sorted and saved into four sets. Figure 3 shows some
statistics concerning the number of words, sub-words and
characters in each set.</p>
    </sec>
    <sec id="sec-2">
      <title>A. Keypoints detection</title>
      <p>In our study, we are not interested in matching two
distinct images representing the same scene (or parts of the
same scene) taken from two different views; our aim is to
compare two images of handwritten words whose similar contents
lie in the same area for all images representing a given word
class.</p>
      <p>The suggested method vertically divides the word images
to be recognized into five frames of equal size. The objective
is to compare the keypoints detected in a given frame with
those of the corresponding frame in another image representing
the same word class. Figure 4 shows an example.</p>
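      <p>The five-frame division can be sketched as follows (a minimal illustration under the assumption that the word image is a 2-D grayscale NumPy array; NumPy's array_split absorbs widths that are not an exact multiple of five):</p>
      <preformat>
```python
import numpy as np

def split_into_frames(word_image, n_frames=5):
    """Vertically split a word image (2-D array, rows x columns)
    into n_frames frames of near-equal width; keypoints are then
    detected and matched frame by frame."""
    # array_split tolerates widths that are not an exact multiple
    # of n_frames: the leading frames get one extra column.
    return np.array_split(word_image, n_frames, axis=1)
```
      </preformat>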
      <p>The number of frames was selected through different tests of
several scenarios and their impact on the recorded recognition
rate (table 1).</p>
      <p>For each word class, we build a model of keypoints using 25
images as training samples. Each class model contains a given
number of keypoints divided into five subsets, representing the
different frames composing the word images. The construction
process of each class model is detailed in the flow chart
presented in figure 5. This process allows us to filter and
improve the robustness of keypoints extracted from the training
images of a given class.</p>
      <p>The number of training images used to build each class
model was also fixed through several experiments. We noticed
that using more than 25 images during the learning process
increases the number of detected keypoints without bringing a
significant improvement to the recognition rate (figure 6).</p>
      <p>A set of 128 features is extracted for each keypoint,
since a keypoint descriptor consists of a 4x4 grid of
eight-bin orientation histograms. Figure 7 presents the
keypoints detection process using SIFT descriptors for the
five frames representing a word image taken from our
database.</p>
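      <p>The 4x4x8 layout of the descriptor can be made concrete with a toy example (illustrative only; the histogram values here are placeholders, not real gradient statistics):</p>
      <preformat>
```python
import numpy as np

# A SIFT keypoint descriptor is a 4x4 spatial grid of 8-bin
# orientation histograms around the keypoint; flattening the grid
# yields the 128-element vector used during matching.
histogram_grid = np.zeros((4, 4, 8))     # placeholder values only
descriptor = histogram_grid.reshape(-1)  # 4 * 4 * 8 = 128 elements
```
      </preformat>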
      <p>IV. A NOVEL RECOGNITION SYSTEM</p>
      <p>In order to show the efficiency of the proposed system,
experimental tests were carried out on both databases: the
IFN/ENIT database and our new database. IFN/ENIT was produced
by the Institute for Communications Technology at the
Technical University of Braunschweig (Institut für
Nachrichtentechnik, IFN) and l'École Nationale d'Ingénieurs de
Tunis. This database was used as a comparison tool to evaluate
researchers' works during the three competitions of the ICDAR
(International Conference on Document Analysis and
Recognition) organized in 2005, 2007 and 2009 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].</p>
      <p>Several tests were conducted in order to determine the
matching ratio; this parameter fixes the number of matched
keypoints, which affects the recognition rate. Tests show that
the number of matched keypoints rises with the matching ratio
(figure 8), but the discriminating capacity of these keypoints
decreases. Figure 9 shows that keypoint matching becomes more
efficient when the matching ratio is fixed at 0.9, even if the
number of keypoints is reduced; worse still, the recognition
rate tends to decrease when the ratio takes higher values.</p>
      <p>The number of keypoints representing each model of the
200 used classes, with which the system registered the highest
recognition rate, is given in figure 10.</p>
      <p>Keypoint matching over the five frames representing an
image pair is illustrated in figure 11.</p>
      <p>Once the keypoints have been detected in two images, they
must be paired. The best candidate match for each keypoint in
the first image is found by identifying its nearest neighbor
in the second one. In this work, matched keypoints are
computed from feature vectors by comparing the Euclidean
distance of the closest neighbor to that of the second closest
neighbor.</p>
      <p>In the recognition process, each test image must first be
divided into five frames; the keypoints are then calculated
for each frame. The matching process is performed as
follows:</p>
      <p>Repeat the following steps for each class model and each
test image:
1) Each frame representing a part of the test image is
compared with the corresponding part of the class model.
2) The matched keypoints rate (MKR) is then calculated for
each frame as follows:
MKR = matched keypoints' number / (test image detected
keypoints' number + model keypoints' number) (1)
3) An average matching rate (AMR) over the five frames is then
established:
AMR = (MKR1 + MKR2 + MKR3 + MKR4 + MKR5) / 5 (2)</p>
      <p>Finally, the model recording the highest average matching
rate will be considered as the target class. Figure 12 shows an
example summarizing these stages.</p>
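      <p>The decision rule above can be sketched as follows (our own reading: equation (1) as stated, and the AMR taken as the mean of the five per-frame MKR values; the per-frame keypoint counts are assumed to be precomputed by the matching step):</p>
      <preformat>
```python
def mkr(n_matched, n_test_kp, n_model_kp):
    # Equation (1): matched keypoints' number over the sum of the
    # keypoints detected in the test frame and in the model frame.
    return n_matched / (n_test_kp + n_model_kp)

def amr(frame_counts):
    # Equation (2), as we read it: average MKR over the five frames.
    # frame_counts holds one (matched, test, model) triple per frame.
    return sum(mkr(*c) for c in frame_counts) / len(frame_counts)

def classify(per_class_counts):
    # The class model recording the highest AMR is the target class.
    return max(per_class_counts, key=lambda label: amr(per_class_counts[label]))
```
      </preformat>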
      <p>The keypoint descriptors are highly distinctive, which
allows a single feature to find its correct match with good
probability in a large database of features.</p>
      <p>Tests conducted on both databases (IFN/ENIT and our new
database) are listed in table 2, where we can observe that the
system scales well: a loss of only approximately 8% in
accuracy was registered when the number of classes to be
recognized increased from 40 to 200. We also noticed a small
improvement of the recognition rate during tests done on our
new database compared to the IFN/ENIT database.</p>
    </sec>
    <sec id="sec-3">
      <title>C. Results comparison</title>
      <p>In order to prove the efficiency of the proposed method,
we compare the obtained results with some pertinent works on
handwritten Arabic word recognition. However, only the systems
tested on the IFN/ENIT database have been mentioned. The
reported results (table 3) show that our proposed system
compares favorably with these works.</p>
      <p>The contribution of this paper is twofold. Firstly, a new
large and free database of Arabic handwritten words is
presented. Secondly, an effective and robust off-line
handwritten Arabic word recognition system is presented and
evaluated on this new database.</p>
      <p>The developed system uses a new type of features, namely
SIFT descriptors, and an efficient recognition method based on
an image matching procedure. A high recognition rate was
recorded through several experiments conducted on the IFN/ENIT
database and our new database.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>El Abed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Märgner</surname>
          </string-name>
          , ”
          <article-title>ICDAR 2009 - Arabic handwriting recognition competition</article-title>
          ,”
          <source>International Journal on Document Analysis and Recognition</source>
          , Springer, vol.
          <volume>14</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Azizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Farah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Khadir</surname>
          </string-name>
          , M. Sellami, ”
          <article-title>Arabic handwritten word recognition using classifiers selection and feature extraction/selection,”</article-title>
          <source>Proc. The 17th IEEE Conference in Intelligent Information System, Proceedings of Recent Advances in Intelligent Information Systems</source>
          , Academic Publishing House, Warsaw, pp.
          <fpage>735</fpage>
          -
          <lpage>742</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Burrow</surname>
          </string-name>
          , ”
          <article-title>Arabic handwriting recognition</article-title>
          ,
          <source>” Thesis</source>
          , School of Informatics, University of Edinburgh,
          <year>2004</year>
          , England.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T. E.</given-names>
            <surname>De Campos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Babu</surname>
          </string-name>
          , M. Varma, ”
          <article-title>Character recognition in natural images</article-title>
          ,
          <source>” Proc. The International Conference on Computer Vision Theory and Applications</source>
          , Lisbon, Portugal, vol.
          <volume>2</volume>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>280</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Diem</surname>
          </string-name>
          , R. Sablatnig, ”
          <article-title>Recognition of degraded handwritten characters using local features</article-title>
          ,
          <source>” Proc. The 10th International Conference on Document Analysis and Recognition</source>
          , Barcelona, Spain, pp.
          <fpage>221</fpage>
          -
          <lpage>225</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Lowe</surname>
          </string-name>
          , ”
          <article-title>Object recognition from local scale-invariant features</article-title>
          ,
          <source>” Proc. of the International Conference on Computer Vision</source>
          , Corfu, Greece, pp.
          <fpage>1150</fpage>
          -
          <lpage>1157</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Lowe</surname>
          </string-name>
          , ”
          <article-title>Distinctive image features from scale-invariant keypoints</article-title>
          ,”
          <source>International Journal of Computer Vision</source>
          , vol.
          <volume>60</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>91</fpage>
          -
          <lpage>110</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          , ”
          <article-title>Calligraphy word style recognition by KNN based feature library filtering</article-title>
          ,
          <source>” Proc. The 3rd International Conference on Multimedia Technology, Guangzhou, China</source>
          , pp.
          <fpage>934</fpage>
          -
          <lpage>941</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Ramana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Narang</surname>
          </string-name>
          , M. Hanmandlu, ”
          <article-title>Devanagari character recognition in the wild</article-title>
          ,”
          <source>International Journal of Computer Applications</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>45</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Rothacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vajda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Fink</surname>
          </string-name>
          , ”
          <article-title>Bag-of-features representations for offline handwriting recognition applied to Arabic script</article-title>
          ,
          <source>” Proc. The 3rd International Conference on Frontiers in Handwriting Recognition</source>
          , Bari, Italy, pp.
          <fpage>149</fpage>
          -
          <lpage>154</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Subashini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kodikara</surname>
          </string-name>
          , ”
          <article-title>Novel SIFT-based codebook generation for handwritten tamil character recognition</article-title>
          ,
          <source>” Proc. The 6th International Conference on Industrial and Information Systems</source>
          , Sri Lanka, pp.
          <fpage>261</fpage>
          -
          <lpage>264</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Yamaguchi</surname>
          </string-name>
          , M. Maruyama, ”
          <article-title>Character type classification via probabilistic topic model</article-title>
          ,”
          <source>International Journal of Signal Processing, Image Processing and Pattern Recognition</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>123</fpage>
          -
          <lpage>140</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , L. Jin,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          , ”
          <article-title>A novel feature for offline handwritten Chinese character recognition</article-title>
          ,
          <source>” Proc. The 6th International Conference on Industrial and Information Systems</source>
          , Sri Lanka, pp.
          <fpage>763</fpage>
          -
          <lpage>767</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Aouadi</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kacem</surname>
          </string-name>
          <string-name>
            <surname>Echi</surname>
          </string-name>
          , ”
          <article-title>Word Extraction and Recognition in Arabic Handwritten Text</article-title>
          ,”
          <source>International Journal of Computing &amp; Information Sciences</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Amrouch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Mahani</surname>
          </string-name>
          , ”
          <article-title>Recognition of cursive Arabic handwritten text using embedded training based on HMMs,”</article-title>
          <source>Journal of Electrical Systems and Information Technology</source>
          ,
          <year>2017</year>
          (article in press).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>